An interview with Ellick Chan, Head of University Relations and Research — Intel AI Academy

Sayak Paul
7 min read · Feb 9, 2020


I am pleased to welcome Ellick Chan of Intel today. Ellick leads efforts at Intel to bring Intel AI technology and curriculum to top universities. He is also an Adjunct Professor at Northwestern University, where he teaches the Deep Learning: Theory and Applications course. Previously, he was a postdoc at Stanford University, where he worked on medical-record security for the SHARPS project, a consultant on deep learning projects, and a systems programmer bridging the interface between operating systems and hardware. You can learn more about Ellick here.

I would like to wholeheartedly thank Ellick for taking the time to do this interview. I hope it serves the betterment of the data science and machine learning communities in general :)


Sayak: Hi Ellick! Thank you for doing this interview. It’s a pleasure to have you here today.

Ellick: Glad to be here.

Sayak: Maybe you could start by introducing yourself — what is your current job and what are your responsibilities over there?

Ellick: I currently lead several university engagements for our AI and accelerator efforts in developer relations. We help students and professors make the best use of Intel hardware for their machine learning and computing needs.

Sayak: Interesting! I am curious to know how you became interested in machine learning.

Ellick: My first machine learning project started many years ago, around 2008, in graduate school, when we were working with Intel to understand how CPU performance counters could be used to help optimize workloads. We used decision trees to examine common performance bottlenecks through metrics such as cache miss rates, memory accesses, and stalls.

From this project, we realized that machine learning could help solve very complex problems in the real world. Prior to this, much of the machine learning community was doing data mining for marketing purposes and I was excited to see a systems/technical application of ML that could help improve program performance.

See: https://www.usenix.org/legacy/events/hotpar11/tech/final_files/Yoo.pdf [I was involved in the early experiments, but graduated before this paper was written]
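To make the idea concrete, here is a minimal sketch of that style of analysis: a decision tree trained on performance-counter features to flag likely bottlenecks. The feature names, labels, and data below are hypothetical stand-ins for illustration, not taken from the paper above.

```python
# Sketch: classifying workloads from (synthetic) CPU performance counters.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-workload counter readings:
# cache miss rate, memory accesses per instruction, pipeline stall cycles.
X = rng.random((500, 3))
# Toy labeling rule: call a workload "memory-bound" when misses and
# stalls are both high; real labels would come from expert analysis.
y = ((X[:, 0] > 0.6) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# The learned rules are human-readable, which is part of what makes
# trees attractive for this kind of diagnostic work.
print(export_text(tree, feature_names=["cache_miss_rate",
                                       "mem_accesses_per_insn",
                                       "stall_cycles"]))
```

Because the learned rules print as plain if/else thresholds, they are easy to review with domain experts, which is exactly the kind of iteration Ellick describes next.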

Sayak: That is an amazing application of machine learning. Understanding the performance characteristics of a computer system with machine learning definitely excites me. When you were starting out, what kinds of challenges did you face? How did you overcome them?

Ellick: When I first started, there weren't many user-friendly tools for machine learning. Most tools were designed for data analysis and traditional methods. However, I think the true challenge in machine learning usually lies in problem formulation, understanding, and data interpretation rather than in the algorithms themselves. One of the things I found most helpful was being able to iterate quickly, try many things, and work with domain experts to solve a problem.

Going back to the Intel performance-counter example: initially, we tried to do as much recording and instrumentation as possible, which led to collecting large amounts of data. We built simple models and reviewed them with our Intel mentors. Through those meetings, we confirmed what the data was saying and homed in on the architectural aspects of the chip. I will say that there were many red herrings along the way, and the support of domain experts was crucial to helping us refine and iterate on our models.

Sayak: Definitely! I remember my early days, when I did not have a domain expert to work with, and I can certainly appreciate how problematic that was. What are some of the research projects you have found most challenging in your career?

Ellick: All research topics come with different sets of challenges. Some are technical, where a coding breakthrough is needed, and some are organizational, where collaboration with another group is needed to move forward. Early on, I worked on very complex operating systems with many layers of software. One paper involved resurrecting an operating system across a reboot (see BootJacker). That project was difficult due to the technical depth and layering of the software/hardware stack.

Later, I came to work on machine learning and medical data. That project was difficult because doctors oftentimes disagree, which created ambiguity in the ground truth. The data was also very messy, with lots of missing values as well as outright errors.

Finally, I worked on deep learning and multi-channel radar systems in a consulting job. That project was challenging due to the use of then-new technology (deep learning and GPUs) and the noisy data coming from the multi-channel radar systems. On top of that, we had to devise new algorithms to fuse together input from multiple sensors, much like how self-driving cars learn to perceive the world from multiple viewpoints. This project brought together new technology, ambiguity in the data, and the integration of multiple systems of sensors.

If I were to summarize what makes a project difficult, I'd say it's largely the complexity of the project, lack of domain expertise, ambiguity in the data, integration of multiple complex systems, and sometimes a mistake in a fundamental layer of the system that is assumed to be correct. Much of research is trying out hypotheses, discovering that the experiment doesn't match our model of the world, and debugging why that's the case.
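As an aside for readers curious what multi-sensor fusion can look like in code, here is a minimal sketch of late fusion: one small encoder per sensor channel, with the embeddings concatenated before a shared classification head. All shapes, sensor counts, and names here are hypothetical illustrations; the actual algorithms from the consulting project Ellick mentions are not public.

```python
# Sketch: late fusion of several sensor streams with per-sensor encoders.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_sensors=4, in_features=128, hidden=64, n_classes=2):
        super().__init__()
        # One small encoder per sensor channel.
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
            for _ in range(n_sensors)
        ])
        # Fuse by concatenating the per-sensor embeddings.
        self.head = nn.Linear(n_sensors * hidden, n_classes)

    def forward(self, inputs):
        # inputs: a list of tensors, one (batch, in_features) per sensor.
        fused = torch.cat(
            [enc(x) for enc, x in zip(self.encoders, inputs)], dim=-1)
        return self.head(fused)

model = FusionNet()
batch = [torch.randn(8, 128) for _ in range(4)]  # 4 hypothetical sensors
print(model(batch).shape)  # torch.Size([8, 2])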

Sayak: Thank you very much for detailing the projects in this way, Ellick. In the coming years, which areas of machine learning do you think most of the research work will focus on?

Ellick: There are several thrusts that I think will take off, but first let's talk about what's driving them. First, we are in an AI hardware renaissance: new dedicated chips are coming out, and they will enable the training of models much larger than what we've traditionally been able to build. This helps produce more accurate models that can take on tougher workloads. Second, dedicated edge chips in IoT devices are placing intelligence closer to the data collection and out in the real world, with low power consumption and the ability to train in situ. Finally, practical industrial concerns such as fairness, the cost of data labeling, and integration with existing industrial/business processes are driving the development of algorithms and models that better suit industry.

As a result, I think that future systems will tend to be more real-time, intelligent, secure and explainable. We’ll see large clusters of specialized AI processing units that enable the convergence of AI and high-performance computing applications such as designing aerodynamic shapes. We’ll see intelligence in edge devices, with the use of things like federated learning to improve security and privacy, and these systems will be more explainable. Along with this, better simulation of the world via HPC techniques will allow larger-scale reinforcement learning in areas such as self-driving vehicles.

Sayak: I am absolutely in agreement with those points on HPC and reinforcement learning. Fields like machine learning are evolving rapidly. How do you manage to keep track of the latest relevant developments?

Ellick: Luckily, Intel works with many academic and industrial partners. On the academic side, we are partnered with many of the world’s top institutions and research labs working together to solve tough problems. On the industrial side, we have a program called Intel AI Builder that works closely with top AI companies and startups out of some of the top research labs. Naturally, the best ideas tend to get selected in this vibrant network.

Sayak: That is an unusual approach. As a practitioner, one thing I often find myself struggling with is learning a new concept. Would you like to share how you approach that process?

Ellick: My favorite go-to source for learning new concepts is not reading papers or taking a class. I simply start by going to YouTube and finding a video on the subject. The famous physicist Richard Feynman believed that if an idea cannot be explained to a freshman class, then you don't really understand it. YouTube has many clear videos that deconstruct complex topics in an easy-to-digest format.

Sayak: My go-to resource is primarily blogs, so I can definitely relate here. Any plans to author a book anytime soon?

Ellick: I currently do not have any plans on writing a book, but we are partnered with top academic institutions to create a compelling academic curriculum for AI and HPC. For instance, we worked with Berkeley’s RISE lab to create a course on distributed reinforcement learning. You can find a list of our courses here: https://software.intel.com/ai/courses.

Sayak: Thanks for passing that along, Ellick. Any advice for beginners?

Ellick: The best advice I can share is that I too was once a beginner, and everyone goes through that phase. The best way to gain expertise is by doing and by learning from others, either directly or through reading their work in the field. There is some truth to the adage that fear of failure kills more ideas than failure ever did. All great innovators struggle many times before finally hitting the jackpot. I'd encourage you to pursue what you're passionate about, whether it be AI or any other field. Seek out mentors and learn from one another.

Sayak: Thank you so much, Ellick, for doing this interview and for sharing your valuable insights. I hope they will be immensely helpful for the community.

Ellick: You’re welcome.

I hope you enjoyed reading this interview. Watch this space for the next one; I hope to see you soon. This is where you can find all the interviews done so far.

If you want to know more about me, check out my website.
