An interview with Karl Fezer, AI Ecosystem Evangelist at Arm
This is part of an interview series I have started on data science and machine learning. These interviews are with people who have inspired me and taught me (and continue to teach me) about these beautiful subjects. My main purpose is to gather insights about real-world project experiences and perspectives on learning new things, along with some fun facts, thereby enriching the communities in the process.
This is where you can find all the interviews done so far.
Today, I have Karl Fezer with me. Karl focuses on AI Developer Relations and finds that the best way to build is to give others the tools to create what brings them joy. While his background and hobbies include robotics and human-machine interaction, his career focuses on Developer Experience. He is a staunch believer in open source and sees it not only as a better way to share knowledge but as the right way to model a business. He is currently with Arm, serving as an AI Ecosystem Evangelist. Prior to joining Arm, he worked at Intel as an AI Developer Community Manager. I had the opportunity to interact with Karl on a number of occasions, and he was kind enough to provide feedback on the articles I wrote for the Intel Software Innovator Medium channel. You can learn more about Karl from here.
I would like to wholeheartedly thank Karl for taking the time to do this interview. I hope this interview serves a purpose towards the betterment of data science and machine learning communities in general :)
Arm architects pervasive intelligence. Arm-based chips and device architectures orchestrate the performance of the technology.
Sayak: Hi Karl! Thank you for doing this interview. It’s a pleasure to have you here today.
Karl: Thanks for having me, Sayak. I’m always happy to talk about ML.
Sayak: Maybe you could start by introducing yourself — what is your current job and what are your responsibilities over there?
Karl: Well, as I just started on August 12th, there is a lot that I am still learning, but I am the AI Ecosystem Evangelist at Arm. My main responsibilities (will be) focusing on AI developer outreach around Arm’s IP, which includes both hardware (CPU, GPU, NPU) and software such as Arm NN and CMSIS-NN. The rest of my efforts will involve working with software partners like Google on products like TensorFlow Lite for Microcontrollers, and working with our AI Ecosystem partners who are developing AI solutions based on Arm. Finally, and most importantly, delivering feedback from all of those developers to improve our AI developer experience overall.
Sayak: Great to know about your new venture at Arm, Karl. I am very curious to know: how did you become interested in machine learning?
Karl: I would say it was originally an interest in robotics and human cognition. Both of those I started studying in small ways in high school, mostly reading philosophy texts like “The Emperor’s New Mind” and by experimenting with circuits and programming. I also became a huge follower of Raymond Kurzweil, Rodney Brooks, and (a little later) Sebastian Thrun. After that, I started school focusing on Electrical Engineering, which didn’t last long, and moved on to Philosophy and Psychology, pursuing an understanding of human consciousness. Eventually, in college, I realized that Machine Learning bridged all the things I was interested in and I dove into it fully.
Sayak: Nice to know how your high school interest went on to shape what you pursue today. When you were starting out, what kind of challenges did you face? How did you overcome them?
Karl: That’s a good question. I would say that even on projects where I was the only one I was aware of working on that problem, it’s normally about incremental changes; as in, the part I was working on was adding onto something else. A good example of this was when I was adding RFID capabilities to ROS and I found a GitHub repo of a developer who had a very different project but was using an RFID reader to localize objects. It wasn’t the same hardware, and I had never written a driver before, so I reached out and asked. He was able to narrow down my problem to a few variables which pointed me in the right direction. Then I could get to actually working on my thesis project. I had a similar situation years later when working on an FPV racer I was building and tweaking Betaflight. I would always say ask for help, but remember what got you there and always be willing to pay that forward. We all build on the shoulders of giants, as it were. In some cases it’s really not giants; just a lot of normal-sized people holding each other up, kind of like kids in a trenchcoat sneaking into an R-rated movie.
Sayak: Very true. Asking for help in no way diminishes you. What were some of the capstone projects you did during your formative years?
Karl: Formative years probably refers to grad school in my case, and I worked on a few fun projects. This was before any of the frameworks, so one was training hand-built Neural Networks to predict the weather based on ~20 years’ worth of weather-station data. Another fun one that was probably my favorite was working on a team to develop a fuzzy-logic flight controller for a UAV. The biggest was probably my thesis, where I used genetic algorithms to develop mapping behaviors for an indoor robot, using RFID tags as static landmarks. Looking back, it was effectively a GA version of Reinforcement Learning. I was also reading a lot of behaviorism at the time, both human- and robot-related.
Sayak: Ah! Those must have been some serious fun — getting lost in the world of machine learning, soft computing and reinforcement learning! These fields, data science and machine learning, are rapidly evolving. How do you manage to keep track of the latest relevant happenings?
Karl: That’s a hard question. It’s getting to the point where it’s almost impossible. There are a lot of ways to curate your various news feeds to help, but ML is the perfect example of how we (and soon programs) are generating more and more content all the time, too much for our little meat-brains to keep up with. Instead, I try to focus more on what tools are being developed that aid in developing ML algorithms. Effectively, learn tools and ways to develop, not algorithms. There are a lot of websites that help, everything from PyImageSearch with a practical spin to Distill that helps me understand topics that might not be in my wheelhouse but I could still learn some interesting methodologies from. Also, talking to partners and developers at events is always helpful. It’s nice to have formed a good peer group of people around the world who are working on different topics that I can ask. I’m very much a generalist.
Sayak: Nice to know that you follow PyImageSearch. I am also a big fan of Distill and Christopher Olah’s writings. Being a practitioner, one thing that I often find myself struggling with is learning a new concept. Would you like to share how do you approach that process?
Karl: Also a good question. A lot of new concepts can be hard to understand, depending on how abstract they are. I tend to use a lot of metaphor to try to understand advanced concepts. It helps that a lot of ML is biomimicry, so there should be good practical examples. I forget what it’s called, but there’s a method of Reinforcement Learning that gives minor rewards to agents that fail to make a full step but move in the right direction. The way it made sense to me is: if you were teaching someone to build a boat and they failed, but managed to build a paddle, you’d still give them some encouragement. It really doesn’t matter if it’s not related or if it doesn’t make sense to anyone else; it’s whatever it takes to help you understand it. We all learn differently.
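The partial-reward idea Karl describes is commonly known as reward shaping. Here is a minimal, hypothetical sketch of it (the function and values below are illustrative, not from any specific library):

```python
# Illustrative sketch of reward shaping: credit progress toward a goal,
# not just success. All names and reward values here are hypothetical.

def shaped_reward(old_distance, new_distance, reached_goal):
    """Return a reward that also credits partial progress."""
    if reached_goal:
        return 10.0   # full reward for completing the task
    if new_distance < old_distance:
        return 0.5    # small bonus for moving in the right direction
    return -0.1       # mild penalty for stalling or moving away

# An agent that only builds "the paddle" still earns some encouragement:
print(shaped_reward(old_distance=5.0, new_distance=4.0, reached_goal=False))  # 0.5
print(shaped_reward(old_distance=4.0, new_distance=0.0, reached_goal=True))   # 10.0
```

The dense feedback from partial rewards usually makes learning faster than a single sparse reward handed out only at the finish line.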
Sayak: Real-world analogies always work, no matter how complex or counter-intuitive the concept. I recollect learning why we multiply the inputs to a neural network by weights through a superb analogy in the Grokking Deep Learning book by Andrew Trask. Any advice for beginners?
Karl: I would say play to your strengths and passions. Just like everything, there are going to be moments where it feels like you’re grinding to learn something. Some people, me included, find a lot of the underlying math difficult to understand, but if you start interacting with it in a real way and building a project firsthand, you tend to understand things better, or at least I do. It’s why I was drawn to robotics in the beginning; you can actually observe and see what your code is doing. Plus, robots are cool. If you start losing motivation, then stop and find something that draws you in. Find friends, go to a meetup, join a club, or find a makerspace if you want to interact with people in person who have the same interests. There are also thousands of online communities at this point working in ML who are equally good at sharing what they know. Find a cool open source project on GitHub, learn it, and contribute to it. Also, do some reading on Bayesian Probability.
Sayak: I get that grinding feeling too whenever I start reading a paper (chuckles). Thank you so much, Karl, for doing this interview and for sharing your valuable insights. I hope they will be immensely helpful for the community.
Karl: Thanks for the opportunity! I also hope it will be helpful.
Summary
It was amazing to hear about the early projects that Karl developed in grad school. I have to say, Karl is a true maker at heart. He has first-hand experience across a range of domains, from robotics to machine learning to reinforcement learning, which is superb. As Karl mentioned, it’s important to follow your passion; it’s okay to have that grinding feeling in the process, but you need to keep pushing as far as you can. I think this makes sense for everyone in the community, no matter how small or how big :)
I hope you enjoyed reading this interview. Watch this space for the next one, and I hope to see you soon.
If you want to know more about me, check out my website.