The world of Machine Learning (ML) loves the money phrase: "Zero to Hero". You know how the story ends: "if I were to collect a penny for every blog I read or YouTube video I watched on ML without the currency title, I'd retire a poor man…", but I digress.

Recently, I had the privilege of being part of a TensorFlow Developer Certification study plan. Following a few weeks of concentrated preparation, mustering the courage to sign-up (with prodding from our group leader!) and enduring the 5-hour exam, I finally had the prize in my possession that honestly felt elusive at times during the preparation. It was gratifying to read the validating words:

TensorFlow Certification Badge

Congratulations! You passed the TensorFlow Developer Certificate exam, and you are now a TensorFlow Certificate Holder! We recognize the hard work and dedication it takes to reach this level, and we look forward to supporting you in the next steps on your journey as a TensorFlow developer.

A little background. When I started my Machine Learning journey, anchored on the motivation of passing the TensorFlow Developer Certification, truth be told, it felt a little presumptuous. The notion was not unfounded, since the evidence followed rather quickly. The famous Laurence Moroney (more on him later) has a popular YouTube series titled "ML Zero to Hero". Being relatively new to Python, not knowing what a Tensor was and armed with only a layman's definition of a model, the first few seconds of the video literally had me at Zero. Not a huge deal, I thought, because it basically meant I was going to start at Week 0 instead of Week 2. However, despite having a well-defined, thorough week-by-week preparation plan, I struggled to get the much-needed initial grip on the learning material. Researching on the web always seems like a good idea, but as a newbie to ML, I had no idea how to separate the wheat from the chaff in the vast world of Machine Learning, filled with both pure and polluted content. That cost me precious time, and the experience was demotivating.

Eventually I figured out what and whom to trust (the answer was right in front of me), what's noise vs. what's relevant (TensorFlow.org is the source of truth), and learned how to code and organize datasets and different model architectures (from Regression to Time Series) on Google Colab and PyCharm (the IDE used in the exam) to fast-track my progress.

I've tried to compile the details of my experience over a period of 12 weeks. Hopefully it helps you in your journey, in some form or manner, to find the most logical and efficient path to learn, avoid pitfalls, save time, and perhaps even propel you to take the exam and pass on the first attempt.

Let's begin!

The MVP Steps

I didn't start out in an organized way like the sequence below, but after putting all the thoughts and notes together, it turned out to be an accurate reflection of the steps taken during the preparation. If I were to retake the exam (officially a renewal is due in 36 months), this MVP (Minimum Viable Preparation) would serve as a model template.

  1. TensorFlow Developer Certification Study Plan
  2. TensorFlow Developer Certificate Exam Prep Video Course
  3. Machine Learning and Deep Learning Books
  4. YouTube Series on Machine Learning (TensorFlow channel)
  5. TensorFlow.org Tutorials (official documentation site)
  6. Coding on PyCharm and Google Colab
  7. Pre-Exam Organization
  8. Fun Along the Way
  9. Exam Day!

TensorFlow Developer Certification Study Plan

Study Plan Reference: ak-org/TFDevCert

For all practical purposes, the above GitHub reference served as The Exam Primer, the only preparation plan I relied on. It is quite comprehensive and includes everything required to pass the exam. Quick highlights of my experience using it:

  1. This has a week-by-week study plan for a total of 10 weeks: an 8-week "Study Plan for Experienced" plus a 2-week "Study Plan for Beginners". As I've already mentioned, having calibrated myself on the Zero to Hero scale, my starting point and the 10-week duration were pretty well set.
  2. I reviewed the Certificate Information page under Week-0 and downloaded the essential Certification Handbook PDF. Although I didn't have to digest it fully in the beginning, I did skim the TOC and read the Exam Details section carefully, particularly paying attention to the FIVE areas (Regression, DNN, CNN, NLP and Time Series) under the skills checklist, the substance of the exam.
  3. Staying current on each week's goal was a challenge; even after carving out dedicated time for studying and coding, my beginning was wobbly. However, I kept the pressure on myself by making it a point to join the weekly meeting, actively participate in the discussions and seek help as needed.
  4. While working on the weekly assignments involving Colab examples, I eventually learned how to balance what needs to be done now vs. later. For me, it was more effective to go from the beginning all the way to the end, in order to get fully familiar with the entire scope of a notebook, rather than getting caught up in the details of every code snippet and researching every new term and concept then and there. It made more sense to revisit and reinforce later, when my mind had matured a bit more with ongoing learning.
  5. Even though I was supposed to set up the PyCharm development environment in Week-0, I didn't do so. I relied on Colab for far too long, and that was detrimental. Midway through the course, I visited the Exam Environment section of the Certification Handbook PDF and followed the Set up your environment to take the TensorFlow Developer Certificate Exam instructions to set up my PyCharm environment (along with other supported libraries, including TensorFlow), and that worked out rather smoothly all the way to the end.
  6. The A Cloud Guru video training also has a special chapter with step-by-step instructions on how to set up the development environment, but it is important to follow the official link above to complete the setup: TensorFlow, the supporting libraries and the PyCharm version are continually revised by the exam authority, and external instructions may be outdated.
  7. Getting to know all the datasets presented in the study material, and building various ML model architectures around them to solve the problem at hand, was the best part of the 10 weeks and proved highly relevant in building confidence for the exam.
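To give a flavor of the first of those five areas, here is the classic single-neuron regression (the same "Hello World" Laurence Moroney opens ML Zero to Hero with): a single Dense layer learning y = 2x - 1 from six points. A minimal sketch, not an actual exam problem.

```python
import numpy as np
import tensorflow as tf

# Six (x, y) pairs sampled from y = 2x - 1, reshaped to (samples, features).
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float).reshape(-1, 1)

# One Dense unit = one weight and one bias: the simplest possible model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1)
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

# After training, the prediction for x=10 lands close to 2*10 - 1 = 19.
print(model.predict(np.array([[10.0]]), verbose=0))
```

The point isn't the arithmetic; it's seeing the full build/compile/fit/predict loop in its smallest form.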

TensorFlow Developer Certificate Exam Prep Video Course

For the purpose of the exam, I wanted to take at least one video training course to learn ML and TensorFlow in an orderly fashion. I found several options in the TFDevCert study plan under the Recommended Self Learning resources section. One stood out, but I didn't end up taking it. Instead, I went with the A Cloud Guru course because of my existing subscription. That turned out to be gold and helped me augment the other study material I was working through in parallel. A little more color on both courses.

  1. TensorFlow Developer Certificate Exam Prep, taught by Adam Vincent, offered by A Cloud Guru (ACG). Adam has done an excellent job with this training overall. I benefited from it in multiple ways.
  • The course takes about 12 hours in total to complete. I did one full pass watching all the videos and labs, then another pass coding the examples and lab exercises. Later, I watched bits and pieces to reinforce concepts as needed.
  • What I most appreciated in this course is the logical ordering of the chapters, very suitable for a beginner new to both ML and TF. It starts with the PyCharm dev environment setup instructions, teaches the basics of tensors, data loading/parsing/wrangling, NNs and Keras, and then covers each of the exam topics (CNN, NLP and Time Series) systematically. The Machine Learning world dumps everything on you at once, which can be highly chaotic for a beginner. Thanks to this orderly course, I was able to maintain my sanity.
  • The most important lesson I learned here is how to write a full model program in Python in the PyCharm IDE, both firsts for me. Not only did I finish all the labs, I also captured and practiced all his example code, in part and in full, and ended up with a good collection of functions to refer to and reuse elsewhere. This course does not provide its examples as notebooks.
  • It turned out that I looked up my notes from this course on the basic ML concepts multiple times because of the way Adam Vincent explained them. Concepts like backpropagation, loss functions, optimizers, activations and the different model architectures (e.g. LSTM) took a while for me to internalize and needed revisiting.

2. DeepLearning.AI TensorFlow Developer Professional Certificate, taught by Laurence Moroney, offered by Coursera. I haven't taken this course personally, but as I've said before, he is a trusted authority in the fields of ML and TF and I like his teaching style. I wasn't at a loss, however, for not taking this course, because I did watch all his YouTube videos (details below) and they played a critical role in passing the exam. Except for Time Series, he's covered everything free on YouTube, including assigning homework.
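For readers new to that vocabulary, a minimal Keras snippet ties the terms together: each Dense layer applies an activation, compile() selects the optimizer and loss, and fit() is what runs backpropagation. The shapes and values here are illustrative only, not from either course.

```python
import tensorflow as tf

# A tiny binary classifier: 10 input features, one hidden layer, one output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation='relu'),     # hidden layer + activation
    tf.keras.layers.Dense(1, activation='sigmoid')    # binary-classification head
])

# The optimizer updates weights from the gradients backpropagation computes;
# the loss function is the quantity those updates try to minimize.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.summary()
```

Calling model.fit(x, y, ...) on labeled data would then drive the backpropagation/optimizer loop the notes describe.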

Machine Learning and Deep Learning Books

Deep Learning with Python, 2nd Edition, by François Chollet

Deep Learning with Python, 2nd Edition, by François Chollet (GitHub). Written by the creator of the Keras deep-learning library. If I were to pick one book to make my inroad into Machine Learning or learn TensorFlow from scratch, this would be my first choice. It is an easy-to-read, digestible book, and it also happens to match the certification scope really closely. The second big reason to choose this book can be attributed to the following quote, to which a majority of us can relate. Coming across it helped me relax and finish the book sooner than I had anticipated.

… although this section deals entirely with linear algebra expressions, you won't find any mathematical notation here. I've found that mathematical concepts can be more readily mastered by programmers with no mathematical background if they're expressed as short Python snippets instead of mathematical equations. So we'll use NumPy and TensorFlow code throughout. (Page 38).

Hands-On Machine Learning Book

Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron

Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron (GitHub). Written by another former Googler, this is a very famous and great reference book. It takes a holistic approach to teaching both the theory and the practical aspects of ML (like a college textbook). Also, as the title suggests, it covers more than TF and Keras. Even though I enjoyed reading it and learned a great deal, I'd read François Chollet's book first and then graduate to Géron's book, instead of vice versa. Worth pointing out in this book are its abundant references to ML papers (e.g. CNN's origin, Ch 14, page 446) and the end-of-chapter exercises. This is a gem and a long-time keeper.

YouTube Series on Machine Learning (TensorFlow channel)

These video (series) links are logically embedded into the 10-week TFDevCert study plan, and that's how I discovered them. Having gone through them, I can easily say they played a major role in building my foundation and, in general, taught me how to approach solving an ML problem. These are my overall observations:

  1. Machine Learning Foundations Series by Laurence Moroney (10 videos)
  • I watched all of them multiple times, didn't skip any!
  • At first I didn't pay attention to the homework assignments LM would give in each video, but then I saw how seriously he was solving them and walking us through his Colab notebooks. By the third video, I got into the swing and never missed a beat.
  • The SHOW MORE link underneath each video contains valuable nuggets for future reference. I bookmarked them all.

2. Natural Language Processing — NLP Zero to Hero by Laurence Moroney (6 videos)

  • These six videos were my primary learning sources to master NLP concepts in general, both for classifying and generating texts.
  • I really got a good hang of the sequence of steps needed to prepare data and build NLP model architectures. The concepts need a little getting used to in the beginning, and LM has nailed them.

3. Machine Learning Zero to Hero (Google I/O'19)

  • This is a fantastic watch. I enjoyed LM's commentary while solving the Rock-Paper-Scissors dataset in a virtual gathering!
  • It helped to reinforce the learnings from previous videos.

4. Machine Learning Crash Course by Developers.google.com

  • This is an amazing resource. I am sure I was pointed to it somewhere on the TensorFlow site. It teaches fundamental concepts like nowhere else, and below is the companion link to watch all the videos on YouTube. They go hand in hand.
  • Machine Learning Crash Course Exercises by Developers.google.com
  • The Crash Course first explains each concept in text form and then on video. The text is more elaborate, and that's where I ended up spending more time. Learning rate is a good illustration of what I am trying to say.
  • Each video packs a lot of tough concepts in a short duration (may I say a little Googly!). I got the gist of what different folks were presenting, but I had to reinforce with further reading.
  • Exercises are the big highlight of this course. I benefited from doing them to reinforce my acquired knowledge on multiple occasions. The Learning rate Playground Exercise nicely demonstrates what I am trying to say!

5. Learn TensorFlow and Deep Learning fundamentals with Python by Daniel Bourke (2 videos, 14 hours)

  • These two videos are featured in the first 2 weeks of the TFDevCert study plan, and watching them may seem overwhelming. However, as the videos progressed, I soon realized their importance. (This was also my very first introduction to Daniel Bourke; I'd never heard of him before. It seems he lives on YouTube. Go figure!)
  • They're meant to teach those who are brand new to this field, and unaware of the coding challenges involved, how to approach TensorFlow, discover resources and learn it instead of being intimidated by it. I also got a crash course here on how to use Colab effectively, instead of having to learn it from another source.
  • I also bookmarked and read through his main website called Zero to Mastery TensorFlow for Deep Learning Book (see GitHub). The site is logically organized.
  • Thanks to the well-organized TOC underneath each YouTube video, I was able to do one quick pass and then go back to individual topics as needed. In that sense, these videos serve as a highly valuable future reference.
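The NLP preparation steps from the series above (tokenize, convert text to integer sequences, pad to a fixed length) can be sketched roughly as follows. The videos use the Keras Tokenizer; newer TensorFlow versions favor the TextVectorization layer, which I use here instead, and the sentences are illustrative only.

```python
import tensorflow as tf

# A handful of toy sentences standing in for a real corpus.
sentences = ["I love my dog", "I love my cat", "Do you think my dog is amazing"]

vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=100,            # vocabulary cap (slot 0 is padding, 1 is [UNK])
    output_mode='int',
    output_sequence_length=8   # pad/truncate every sentence to 8 tokens
)
vectorizer.adapt(tf.constant(sentences))   # build the vocabulary from the corpus

padded = vectorizer(tf.constant(sentences))
print(padded.shape)   # (3, 8): three sentences, each a length-8 integer sequence
```

The padded integer matrix is what feeds an Embedding layer in the classification and text-generation models the series builds next.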

TensorFlow.org Tutorials (official documentation site)

The TFDevCert study plan has done a good job of pointing out the valuable resources on the TensorFlow.org site. I cannot emphasize enough how important it is to rely on this site, the official guide and source of truth for everything TensorFlow. Going through the guides, tutorials and recommendations on these pages taught me the right ML fundamentals, the best way to pick the right model architecture for the problem at hand, and how to improve model performance following proper design principles. For example:

  • Simple does it. "A simpler model is always a better choice to begin with, and it may even perform better than you think!" I learned this lesson on multiple occasions, first through some sage coaching and later by my own validation. It takes experience to appreciate the amazing power of deep learning models. Starting with a single layer with just enough neuron capacity, evaluating its performance, and only then gradually adding more layers and capacity is the better art of model training. It pays off in the long run compared with jumping the gun early on with a bigger, more complex model for no apparent gain; worse, that path may lead right back to square one, starting all over again. Overfit and underfit does a good job explaining this important concept with its illustrations of Tiny, Small and Large models.
  • François Chollet's DL with Python, 2nd Edition, Chapter 5 (Fundamentals of machine learning) is also an excellent read to extend the above concepts, providing a comprehensive set of tools of the model-training trade. This chapter summed up a lot of my reading on TensorFlow and other sites and provided a glimpse of how ML experts think.
  • Focusing on TensorFlow.org also helped me ultimately come to the realization that there are a lot of outdated, needlessly complex and hard-to-explain models floating around the web. Relying on the source of truth is absolutely paramount, not only for learning the right content but for refreshing it when revisions are made in this rapidly innovating and changing space. Text classification with an RNN and Time series forecasting are two quick examples that I didn't appreciate on first reading on TensorFlow.org, but I realized the elegance and totality of their teaching after reworking some examples I found on the web, applying the "simple does it" paradigm, and gaining an in-depth understanding of the concepts taught in these pages. As the saying goes, better late than never!
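The "simple does it" progression can be made concrete with a helper that builds the same architecture at different capacities, in the spirit of the Tiny/Small/Large comparison from Overfit and underfit. The input shape and unit counts below are my own illustrative choices.

```python
import tensorflow as tf

def build_model(hidden_units):
    """Same architecture, varying capacity: compare before reaching for 'Large'."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),                          # 4 toy input features
        tf.keras.layers.Dense(hidden_units, activation='relu'),
        tf.keras.layers.Dense(1)                             # regression head
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

tiny = build_model(4)      # start here, train, evaluate...
large = build_model(512)   # ...and only grow if the tiny model clearly underfits
print(tiny.count_params(), large.count_params())
```

Training both on the same data and comparing their validation curves is exactly the experiment the tutorial walks through.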

During my preparation, I bookmarked the following links for easy access and took special notes on how to use the Guide, Tutorial and API pages effectively (the three main pillars). They are worth sharing here because, in spite of the site's intuitive design, it took me a while to find my way around.

  1. On each Guide and Tutorial page (see example), there is an option to Run in Google Colab, along with GitHub and Download links. I didn't use it in the beginning, but later realized that clicking the Colab link is a better way to navigate the TOC links, view the code snippets and even execute the code right in the notebook itself. It saved enormous time.

2. I also downloaded and saved the notebooks in order to embellish them with my own code, and later used them as references, including during the exam.

3. On each API page (see example), I found that the table with the Used in the guide and Used in the tutorials headers is thoughtfully done. It gave me a golden opportunity to apply the above tricks and see how a particular API is used in a tutorial example.


4. Tutorials

5. All Datasets Catalog

  • I tried out as many datasets as possible on top of what's in the TFDevCert study plan. This is an amazing contribution by the TensorFlow team.

6. TensorFlow Guide

7. TensorFlow API

Coding on PyCharm and Google Colab

When I started my journey, I was new to both PyCharm and the notebook environments. Because the exam takes place in the PyCharm IDE, I installed and set it up, albeit a little late in the game, following the A Cloud Guru course instructions. However, Colab was the de facto practice ground at first. Soon I realized that spending a whole lot of time on Colab, though helpful, wasn't enhancing my learning, and my brain wasn't retaining as much. The learning felt scattered. So I switched my primary coding activity to PyCharm, where I was able to write every modeling exercise in its full anatomy (main body, functions, copious comments, etc.). The momentum of the preparation picked up after that change. Colab still remained very much in use, but primarily to practice code snippets and to leverage its GPU computing power whenever necessary, especially while training complex CNN models. I also made heavy use of Colab to catalog all my model collections for easy reference.

A few more tricks of the trade in balancing activities between PyCharm and Colab and, most importantly, practicing in an environment that's compatible with the latest Exam Environment (details under Pre-Exam Organization).

  1. Once I was all set on PyCharm, I started all my modeling exercises in the IDE by writing a full program for each problem. ACG instructor Adam Vincent calls it the full model creation process. Following his style, I forced myself to write functions to read different data formats, wrangle and transform data to fit a model's expectations, plot helpful graphs, build, evaluate and compare different model architectures, etc. After mastering this art, I was able to write programs quickly and independently, retain the concepts in my mind, and build up a good set of reusable utilities shared among different programs.
  2. The process of writing full programs on PyCharm also translated well into Colab. If there was a necessity to use Colab, I simply copied and pasted the full program into one Colab code cell. I followed this style in cataloging all my models (.ipynb files) for later reference.
  3. While practicing notebooks written by others, including those on the TensorFlow.org site, I also collected commonly used code snippets, turned them into utility functions and logically organized them into different .py files, easily importable in PyCharm as well as Colab.
  4. One of several utility files was the History.py I got from the ACG course. This became part of all the modeling exercises I worked on during the entirety of my preparation, including the exam. It takes the history object returned from the model training function (fit) and plots the training and validation loss and accuracy on a graph. This makes it really easy for a human eye to spot the overfit and underfit scenarios, tune the hyperparameters to make the necessary adjustments, and repeat the whole training cycle until attaining an optimal result.
  5. Another realization that came after practicing a few different publicly available Colab notebooks is that not everything everybody is trying to teach, including on the TensorFlow.org site, needs to be learned in one go. For example, the notebook Natural Language Processing with TensorFlow by Daniel Bourke has 7 model architectures. I soon realized I should probably focus on 2 or 3 of them and revisit the rest at a later time.
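I can't reproduce the ACG History.py here, but a utility in that spirit might look roughly like this (a sketch of my own; the function name and layout are not the original's):

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training vs. validation curves from the object returned by model.fit().

    Expects a Keras History-like object with a .history dict; keys beyond
    'loss' are optional and plotted only when present.
    """
    h = history.history
    fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(12, 4))

    # Loss panel: diverging train/validation curves signal overfitting.
    ax_loss.plot(h['loss'], label='training loss')
    if 'val_loss' in h:
        ax_loss.plot(h['val_loss'], label='validation loss')
    ax_loss.set_xlabel('epoch')
    ax_loss.set_ylabel('loss')
    ax_loss.legend()

    # Accuracy panel, when the model was compiled with an accuracy metric.
    if 'accuracy' in h:
        ax_acc.plot(h['accuracy'], label='training accuracy')
        if 'val_accuracy' in h:
            ax_acc.plot(h['val_accuracy'], label='validation accuracy')
        ax_acc.set_xlabel('epoch')
        ax_acc.set_ylabel('accuracy')
        ax_acc.legend()

    return fig
```

Typical use is simply plot_history(model.fit(...)); tune, retrain, and plot again until the curves look healthy.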

Pre-Exam Organization

Exactly a week before the exam, it was time to get organized. Even though the exam is open book, it was my firm conviction that going into it without a ready battle plan would prove fatal. I was intimately familiar with how fast a five-hour window flies while solving ML problems. My strategy going into the exam was not only to keep my own prior work handy for reference but also to practice in advance how to pull up help on the web if the situation demanded. It paid off!

  1. I revisited the Exam Environment section of the Certification Handbook PDF and read the Set up your environment to take the TensorFlow Developer Certificate Exam instructions one more time to make sure my computer was ready for the exam, even though I had set up and been working in that environment all along.
  2. Special note: Because TensorFlow, the other supporting libraries and the PyCharm version are continually revised, it is important to consult the official guide above throughout the preparation. The whole setup process repeats one more time right before the exam.
  3. Because the exam installs a plugin into the PyCharm IDE, once it is activated there is no way to refer to any other project code in the same IDE without risking premature termination of the exam. So I collected all my model problems and anything else I had worked on from a variety of sources, including the TFDevCert study plan, out of my current PyCharm environment. A few nights before the exam, I checked them all into my private GitHub repository.
  4. To ease the look-up process, I categorized all the solutions I had worked out along with the specific dataset names into FIVE different groups aligned with the exam skills checklist (NN with TF, DNN, CNN, NLP and Time Series).
  5. In addition to the GitHub repository, I also arranged all the solutions as Colab .ipynb files on my Google Drive for ready access, as well as a backup to GitHub.
  6. For the extreme case of having to go outside my own solutions, I had my "searching in the wild" options lined up in order of merit, which I had tested beforehand.

Fun Along the Way

Learning must be fun! Even in the midst of studying for an exam, be it Machine Learning, TensorFlow, Keras or anything else. Being new to the field, virtually everything I was learning was fun and exciting. The Confusion Matrix is still confusing, but that's for another time. Findings like the Godfathers of AI (I wonder what offer they made that we couldn't refuse!) and "Friends don't let friends use minibatches larger than 32" were making the field even more endearing. Here are a few more for a chuckle!

  1. Dropout inspired by Bank Teller Conspiracy Theory? Dropout is one of the most effective and most commonly used regularization techniques for neural networks; it was developed by Geoff Hinton and his students at the University of Toronto. This technique may seem strange and arbitrary. Why would this help reduce overfitting? Hinton says he was inspired by, among other things, a fraud-prevention mechanism used by banks. In his own words, "I went to my bank. The tellers kept changing and I asked one of them why. He said he didn't know but they got moved around a lot. I figured it must be because it would require cooperation between employees to successfully defraud the bank. This made me realize that randomly removing a different subset of neurons on each example would prevent conspiracies and thus reduce overfitting." The core idea is that introducing noise in the output values of a layer can break up happenstance patterns that aren't significant (what Hinton refers to as conspiracies), which the model will start memorizing if no noise is present. (Source — DL with Python, 2nd Edition, by François Chollet, Page 150)
  2. Spherical Cows? The joke goes like this: Milk production at a dairy farm was low, so the farmer wrote to the local university, asking for help from academia. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, notebooks crammed with data, where the task of writing the report was left to the team leader. Shortly thereafter, the physicist returned to the farm, saying to the farmer, "I have the solution, but it works only in the case of spherical cows in a vacuum." Not to deflate your enthusiasm in solving the time series model to predict GOOG stock price using LSTM layers, but hold off on that big market order just yet. ACG's Adam Vincent warns that the models we're building are perfectly Spherical Cows. Ouch!
  3. Motto #1: If in doubt, run the code! On a serious note, this shouldn't come as a surprise, but I needed an extra dose of reminder of this "programmer's mantra". I was glad to hear Daniel Bourke chanting it throughout his videos and courses. Go Colab!

Exam Day!

One of the unique aspects of the TensorFlow Dev certification is that the candidate can choose the day and time of the exam within six months from the time of registration.

So one TGIF morning, after walking my dog, I took the plunge. After following the given instructions, primarily getting the PyCharm IDE ready to begin the exam, I found myself in a mini crisis. In my excitement and rush to start solving the problems, I found that TensorFlow 2.7 was not properly installed in my environment, and by then the clock had started ticking. It turned out that my Windows OS settings didn't allow long filenames (see the fix below). After fixing that and reinstalling TensorFlow 2.7, I was back in business, and the day obviously ended well!

The registry key Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled (Type: REG_DWORD) must exist and be set to 1.

During the exam, I did end up applying everything I've mentioned so far. Here is the final recap, bringing it all together at the end.

  1. Following my earlier habit, if I had to use Colab, I copied the entire program from PyCharm, maintaining its full integrity, into one code cell in Colab and ran it there. PyCharm always remained the source of truth. Time is the most precious resource during this exam, and I didn't want to waste any of it on unnecessary confusion.
  2. I kept one GPU-enabled Colab session reserved for the CNN model architecture and ended up using it, because my Windows machine, even running at its best, was much slower than expected.
  3. My pre-exam organization worked out exactly as per the strategy.
  4. Each exam problem comes with a set of instructions, and I made sure I paid attention to every detail in order not to miss any key information or make an inadvertent mistake. I made a duplicate copy of the original problem before starting my work in the PyCharm IDE.
  5. The 5-hour exam window is, for most purposes, sufficient to solve all five problems, including time spent on referencing and researching. Sticking with my personal programming style, I was careful to work systematically, solving each problem step by step and applying all the knowledge from prior preparation. I did not encounter any surprises outside of the TensorFlow.org guides and tutorials, which basically means that if any problem felt challenging, I was able to relate it to its origin in one of the TensorFlow.org Tutorials links I've shared above.
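To give a flavor of that systematic, step-by-step style: every full program I wrote ended with the same closing pattern, namely train, save, and sanity-check the reload. The toy data and the .h5 filename below are illustrative only; check the current handbook for the exact format the exam expects.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data: 100 samples of 4 features, a simple binary label.
x = np.random.rand(100, 4).astype('float32')
y = (x.sum(axis=1) > 2.0).astype('float32')

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=2, verbose=0)

# Save, then reload and verify the round trip before moving on.
model.save('mymodel.h5')                              # HDF5 format
reloaded = tf.keras.models.load_model('mymodel.h5')
assert np.allclose(model.predict(x, verbose=0),
                   reloaded.predict(x, verbose=0))
```

Verifying the reload takes seconds and guards against losing a solved problem to a silent serialization issue.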

Conclusion

If you've made it this far without skipping any of the sections above, thank you!

I made it thus far because I had the privilege of getting the right coaching, help, support and encouragement to prepare and pass this exam. I am fully cognizant of the fact that the result might have gone either way, but thankfully it ended positively and gave me the opportunity to share my experience with you, a true pleasure!

My sincere THANK YOU to each and every one who directly or indirectly contributed to my success, not only in passing this exam but, most importantly, for opening the door to a new, exciting and challenging field of work I was pining to enter.

Hope you feel inspired and begin your journey into Machine Learning, TensorFlow and the world of Artificial Intelligence.

Wish you all the very best!!!

PS:

  • If I've helped you, I'd appreciate a 👏 !
  • Stay tuned for more Machine Learning and Cloud Computing posts.
  • Most importantly, please leave your question and/or feedback to enhance this and future contributions, thank you!!