An interview with Niki Parmar, Senior Research Scientist at Google Brain

Sayak Paul
6 min read · May 29, 2020

I am pleased to have Niki Parmar for today’s interview. Niki is a Senior Research Scientist at Google Brain, where she works on self-attention and its extension to different applications in both language and vision. Her interests lie in generative models, 2D and 3D vision, and self-supervised learning.

Niki has co-authored a number of impactful research papers in the domain, including the seminal paper on Transformers — Attention Is All You Need. To know more about her research and interests, you can follow her on Google Scholar and LinkedIn.
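
For context, the core operation of that paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. Here is a minimal NumPy sketch of that formula; it is illustrative only and omits the learned projections and multi-head structure of the full Transformer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # pairwise similarity scores, scaled
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # weighted sum of value vectors

# Self-attention: a toy sequence of 4 tokens with 8-dim embeddings attends to itself.
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```

In the actual model, Q, K, and V come from learned linear projections of the token embeddings, and several such attention heads run in parallel.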


Sayak: Hi Niki! Thank you for doing this interview. It’s a pleasure to have you here today.

Niki: Hi Sayak, thank you for having me.

Sayak: Maybe you could start by introducing yourself — what is your current job, and what are your responsibilities there?

Niki: Sure. I’m currently a Research Scientist on the Google Brain team, working on research problems in Deep Learning. I joined Google around 4.5 years ago, working on applied research for problems like text similarity and question answering for Search. My focus is on advancing research that can improve the quality, efficiency, or understanding of our existing systems. In collaboration with my peers, I try to tackle research questions on established academic benchmarks that can also lead to product impact.

My journey in research has involved understanding how self-attention and other inductive biases can be used to improve our models across various tasks like Machine Translation, Language Modeling, and more recently Perception. The current research question I’m involved in is learning meaningful representations using self-supervised learning.

Sayak: That’s awesome. Transitioning from applied research to fundamental research must have been quite a journey. You have worn many hats in your career, be it query optimization, big data, or distributed computing. How did you become interested in Machine Learning?

Niki: My first interest in Machine Learning developed during my undergrad, when I took the first MOOCs by Andrew Ng and Peter Norvig on ML and AI. I was curious about the combined power of data, pattern matching, and optimization. When I joined USC, I was lucky to be part of a computational social science lab led by Prof. Morteza Dehghani, where I explored social science questions using ML and big data. At Google, I got the opportunity to learn and work on end-to-end Deep Learning systems that were creating alternative ways of solving NLP problems through the power of transferable embeddings, optimizing directly for the end task, and learning in weakly supervised settings. Across these experiences, deep learning models proved to be powerful tools for a variety of problems. That inspired me to move to pure research, where I could learn and contribute to machine learning.

Sayak: That has been a beautiful journey so far, spanning so much meaningful work. Being a Machine Learning Practitioner myself, I genuinely love how Machine Learning shows its magic of representation learning when applied to the right use-cases. I am curious to know: when you were starting out, what kind of challenges did you face? How did you overcome them?

Niki: In the beginning, I remember being constantly overwhelmed by the amount of information and research happening around me. I think focusing on a specific problem with peers can help you navigate the space and ask the right questions. Having close mentors early on helps in making connections with existing research and refining your thinking, which guides you in the right direction.

Sayak: I absolutely concur with the philosophy of focusing on narrow problems and pursuing them uncompromisingly. You have been a part of groundbreaking research projects. I am sure there’s a specific way you, or rather your team, conducts research. Would you like to shed some light along those lines?

Niki: I believe one should always start by forming the research question/goal and an initial hypothesis. In collaborative settings, it’s useful to brainstorm closely and refine solutions based on incoming results. In our project, we did this every day, discussing at great length the new results we got each day and then going on to try new things. Through the course of a project, paying attention to details is important — every detail, every question, every choice seems relevant. It’s not always possible to go over each one or try all combinations, but doing so where you can improves the results overall.

Also, research questions continue past papers, and extending them to make the work better or applying it to new problems helps ground your understanding and build expertise.

Sayak: Quite interesting. Asking the right questions does pay off well! Fields like machine learning are rapidly evolving. How do you manage to keep track of the latest relevant happenings?

Niki: Definitely, there is a lot of great work happening, with new trends and techniques constantly emerging. I tend to rely on people and groups around me for paper recommendations. Reading groups, Twitter, and Reddit are helpful resources too. Sometimes, when I’m interested in a particular topic, I’ll try to find papers from over the years and go into depth on the latest developments. Generally, it’s hard to read every paper on any research topic, and doing so can also confine your own thinking, so I tend to avoid that.

I also apply a filtering rule of seeing which papers from the past few years are still being used.

Sayak: I am in complete agreement with your point about not going through every paper on a particular topic. Given the latest directions in machine learning, which areas do you think will continue to grow in the near future?

Niki: This has been mentioned multiple times by others, but representation learning from unlabeled data is still challenging and has a lot of potential to grow and create impact. We have just begun to scratch the surface of it, and personally I’m interested to see how that field evolves. Similarly, less common modalities like 3D vision or video present interesting research challenges in compute and in scaling existing computational ops, which might get more attention in the future. One area I’m really excited about is the application of ML in biology, for proteins and molecules.

Sayak: Definitely! As someone who’s currently exploring the field of self-supervised learning, I can only appreciate the potential of representation learning from unlabeled data. As a practitioner, one thing that I often find myself struggling with is learning a new concept. Would you like to share how you approach that process?

Niki: I generally try to read through the relevant papers on a concept and compile questions by connecting it to my own research. What works for me is implementing or reproducing a particular paper and making changes to integrate it with my existing research. This also becomes a good starting point for discussing it and getting feedback from an expert in that area.

Sayak: I agree. Referring to a few resources to get the ball rolling is less daunting and often more productive in general. Any advice for beginners?

Niki: Choosing a problem is one of the hardest parts, and to be honest, I still struggle with it sometimes. Being able to get feedback or collaborate with peers really helps in all aspects. The other thing, which has been echoed several times and is extremely important, is to ask a lot of questions — small or big. Useful brainstorming and interesting connections happen by doing that.

I’ve also found that staying consistent on a problem and not giving up early forces one to look in multiple directions for solutions, which can lead to new insights.

Sayak: Thank you so much, Niki, for doing this interview and for sharing your valuable insights. I hope they will be immensely helpful for the community.

Niki: Thank you for having me, it was a pleasure talking to you.

I hope you enjoyed reading this interview. Watch this space for the next one, and I hope to see you soon. This is where you can find all the interviews done so far.

If you want to know more about me, check out my website.
