An interview with Leslie Smith, Senior Research Scientist at U.S. Naval Research Laboratory

Sayak Paul
Nov 20, 2019

Our interviewee today is Leslie Smith. Leslie is currently a Senior Research Scientist at the U.S. Naval Research Laboratory, working in the area of Applied AI Research. As many of you may know, Leslie has introduced a number of novel techniques to the deep learning community, such as Cyclical Learning Rates, the 1cycle Policy, and Super-Convergence, which have enabled us to train neural networks much faster. In the past, Leslie has also worked on Reinforcement Learning, Maritime Surveillance, Computer Vision, and more.

A few months ago, Jeremy Howard of fast.ai interviewed Leslie, and that interview can be found here. You can learn more about Leslie here.

I would like to wholeheartedly thank Leslie for taking the time to do this interview. I hope it serves the data science and machine learning communities well :)


Sayak: Hi Leslie! Thank you for doing this interview. It’s a pleasure to have you here today.

Leslie: Thank you for inviting me. I feel honored that you believe I have a bit of wisdom to share with the community.

Sayak: That is so humble of you, Leslie. Maybe you could start by introducing yourself and telling us about your current research at the U.S. Naval Research Laboratory.

Leslie: I am very fortunate that I really like my work. Deep learning is a fascinating field and I enjoy discovering clever ideas. Also, I try hard to have my own clever ideas. As you probably know, the field is so hot that it is a challenge to go from idea to published paper before anyone else. My goal is to produce research that has a meaningful impact on the deep learning community and I am interested in discovering ideas that give us a deeper understanding of how and why deep networks work.

The topics that I am currently focusing on include reducing the amount of labeled data needed for training neural networks, making networks more robust to image degradations, and the use of deep reinforcement learning as an intelligent adversary in training service members with strategic combat simulations.

I am working on lots of ideas, which is part of the fun. I feel as though I am creative, and I can imagine more ideas than I have time to work on. I can't work on all of them in a single day, so each day I pick one topic to focus on.

Sayak: The ideas are damn exciting, Leslie. Thank you for sharing them! How did you become interested in machine learning?

Leslie: For the first several years that I worked at the Naval Research Laboratory, I was working in computer vision and related areas. I took notice when neural networks won the 2012 ImageNet Challenge by a wide margin. I also read Google's "Cats paper", formally titled "Building high-level features using large scale unsupervised learning", and I was intrigued that neural networks learned general cat features without any human guidance. I wondered what else networks could learn.

In 2013 I started trying a few things and found I was fascinated by the field. Over the next year, I shifted my focus to deep learning, and I think nearly all the other researchers in computer vision did the same in the following years.

Sayak: That is so very true. Also, the paper you mentioned is a great one! When you were starting out in the field, what kind of challenges did you face? How did you overcome them?

Leslie: I worked hard. I joke that it is hard to get everything done in a 40-hour day, but I try! Since I find the field fascinating, it is easy to spend most of my time learning and keeping up with the field. And the more I learn, the more I realize how much more there is to learn, so in some sense, I am still a beginner. I was recently asked what hobbies I have, and my immediate answer was my research. It is what I like to do.

Sayak: I can definitely feel that, Leslie! I absolutely love everything about my day job and I consider it my hobby. I am very curious to know about your research methodology. Would you like to shed some light on that?

Leslie: Read, experiment, and think constantly. Read to be aware of what everyone else has already thought of, experiment to gain an intuition of how deep learning works, and think of new ideas or insights. Deep learning is considered a black box, so one of my goals is to transform observation into a deep understanding of why deep learning works.

I read constantly because reading a paper often leads to ideas. In addition, it matters to me whether the authors of a paper I am reading make their code available. If so, I'll download it and run it to replicate their experiments. Then I can quickly try out my own idea to see if it makes sense; quick prototyping is important to me here. Let me emphasize that I believe in trying lots of small things as quickly as possible and learning from each experiment. Does it follow my intuition or not, and if not, why?

There are other factors that I consider when deciding whether to pursue an idea, such as whether I think the idea is important enough to have an impact on deep learning research, and my confidence in how likely the idea is to work.

Sayak: So many interesting perspectives; this is going to be extremely helpful. What was the motivation behind the 1cycle policy? It's such a neat idea.

Leslie: My original motivation for creating cyclical learning rates was to simplify finding an optimal learning rate. The only way to find the optimal initial learning rate was a grid search, or possibly a random search, which seemed inefficient to me. My first thought was to determine whether letting the learning rate vary within a range close to the optimal rate would work as well as using the optimal learning rate itself, and I invented the cyclical learning rate to test this idea. All my experiments obtained optimal performance, provided the learning rate varied within reasonable bounds.
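To make the idea concrete, here is a minimal sketch in plain Python of the triangular schedule from the Cyclical Learning Rates paper. The values of base_lr, max_lr, and step_size are illustrative placeholders, not recommendations from the interview; in practice the bounds come from the LR range test Leslie describes next.

```python
# A minimal sketch of a triangular cyclical learning rate (CLR).
# base_lr, max_lr, and step_size are illustrative values only.

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Return the learning rate for a given training iteration.

    The rate climbs linearly from base_lr to max_lr over step_size
    iterations, descends back over the next step_size, and repeats.
    """
    cycle = iteration // (2 * step_size)            # index of the current cycle
    x = abs(iteration / step_size - 2 * cycle - 1)  # position in the cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# The rate at a few points in the first cycle:
for it in (0, 1000, 2000, 3000, 4000):
    print(it, triangular_clr(it))  # 1e-4, ~5e-3, 1e-2, ~5e-3, 1e-4
```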

With the cyclical learning rate model in hand, I quickly realized that I could easily estimate good bounds around the optimal learning rate by performing my LR range test, which is also called LR_finder in the fast.ai library. The idea is to make a short training run, letting the learning rate increase until training starts to diverge. This tells me the maximum usable learning rate, which in turn gives a good estimate of the reasonable minimum and optimal learning rates.
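Below is a rough sketch of what such an LR range test can look like in PyTorch. This is not the fast.ai implementation; model, optimizer, and train_loader are assumed to be defined elsewhere, and the exponential growth factor and divergence threshold are arbitrary illustrative choices.

```python
# A sketch of an LR range test: grow the learning rate each batch and
# stop once the loss clearly diverges. Assumes `model`, `optimizer`,
# and `train_loader` (a classification setup) exist elsewhere.

import math
import torch.nn.functional as F

def lr_range_test(model, optimizer, train_loader,
                  start_lr=1e-7, end_lr=10.0, num_steps=100):
    # Multiplicative factor so the LR spans [start_lr, end_lr] in num_steps.
    gamma = (end_lr / start_lr) ** (1.0 / num_steps)
    lr, best_loss = start_lr, math.inf
    lrs, losses = [], []
    for step, (x, y) in enumerate(train_loader):
        if step >= num_steps:
            break
        for group in optimizer.param_groups:
            group["lr"] = lr
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        lrs.append(lr)
        losses.append(loss.item())
        best_loss = min(best_loss, loss.item())
        if loss.item() > 4 * best_loss:   # training has started to diverge
            break
        lr *= gamma
    # Plot losses against lrs and pick the rate just before the divergence.
    return lrs, losses
```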

The idea behind the 1cycle policy was different. In the 1cycle policy, the learning rate goes through just one cycle, not multiple cycles as in cyclical learning rates. I was finding that the learning rate could become very large when using ResNets with batch normalization and large batch sizes. Here I could use a learning rate schedule that started small, increased to a very large learning rate, and then came back down to a very small one. I was focused on finding a good learning rate schedule, while a group at Facebook was independently focused on fast training with very large batch sizes, but our results were equivalent. Their paper on training ImageNet in 1 hour has similarities to my Super-Convergence paper, where I introduced the 1cycle policy.
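For readers who want to try this, PyTorch ships an implementation of the 1cycle policy as torch.optim.lr_scheduler.OneCycleLR (fastai's fit_one_cycle offers the same policy). The sketch below assumes hypothetical model and train_loader objects; max_lr is an illustrative value that would normally come from an LR range test. Note that OneCycleLR also cycles the momentum by default, in line with the Super-Convergence paper.

```python
# A minimal sketch of training with PyTorch's built-in 1cycle scheduler.
# `model` and `train_loader` are assumed to exist; max_lr is illustrative.

import torch
import torch.nn.functional as F

epochs = 10
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=1.0,                      # the large peak rate behind super-convergence
    epochs=epochs,
    steps_per_epoch=len(train_loader),
)

for epoch in range(epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()             # advance the schedule once per batch
```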

Sayak: That was very comprehensive, Leslie. Thank you! Fields like machine learning are evolving rapidly. How do you manage to keep track of the latest relevant happenings?

Leslie: I spend many, many hours every week keeping up to date, but that is just me. Every day I look through all the new papers on arXiv.org to find ones that might be relevant and make notes about the ones that interest me; this has become a ritual for me. For the ones that look relevant, I read the abstract and skim the paper, and if it catches my interest, I print it. I spend most of my weekend reading these papers. It takes hours, but I stay up to date, and reading is a great source of intuition and new ideas.

Sayak: Being a practitioner, one thing that I often find myself struggling with is learning a new concept. Would you like to share how you approach that process?

Leslie: I can relate to what you mean. Sometimes it just takes repetition before a new concept makes sense. I'll reread an important paper two or three times until I understand it. I consider it normal if I don't understand something the first time, and I tell myself that it will make more sense the next time I read it. True understanding often doesn't come for quite a while.

In research, you must be comfortable with failing because it is an integral part of the field. You fail to understand new concepts at first. You experiment with a new idea and it fails. You submit papers to conferences and they are rejected. The key is to learn all you can from each failure. Make as many small bets as possible, learn from them, and go on to the next thing. In time, failure is just par for the course.

Sayak: That is very comforting, Leslie. Any advice for beginners who are starting their journey in deep learning research?

Leslie: Beginnings are always the hardest part. As a scientist, I view life as a series of experiments. Make a deal with yourself to try something for long enough to know whether it is right for you. If it is not, stop and move on to something else. Knowing it is an experiment and not a commitment makes it a lot easier to try things. Life is a series of experiments.

Sayak: Thank you so much, Leslie, for doing this interview and for sharing your valuable insights. I hope they will be immensely helpful for the community.

Leslie: I am flattered that you invited me and feel I have something worthwhile to pass on to other researchers.

Summary

I have always been inspired by Leslie's ideas, his papers, and the humility he possesses. Throughout this interview, he shared his lifelong learnings, his research experience, and what motivates him to pursue interesting research ideas. Leslie is a veteran scientist who loves his work very much, and that is what enables him to keep pushing the limits. He doesn't fear failure; instead, he learns from it. I think that is the fundamental fuel for working with a steady mind.

I hope you enjoyed reading this interview. Watch this space for the next one; I hope to see you soon. This is where you can find all the interviews done so far.

If you want to know more about me, check out my website.
