An interview with Patrick Hall, Principal Scientist at bnh.ai and Advisor to H2O.ai

Sayak Paul
9 min read · Apr 16, 2020


We have Patrick Hall for today’s interview. Patrick has worn many hats over the years: maker, product director, adjunct professor. Currently, he is wearing three such hats. He serves as Principal Scientist at bnh.ai (a law firm he co-founded), he advises H2O.ai on their responsible AI efforts, and he still teaches at George Washington University.

Patrick is one of the most prominent figures out there when it comes to Machine Learning Interpretability (MLI) efforts. He has authored workshop and journal papers on responsible AI and an e-book on related subjects, and he has given a number of interesting presentations as well. A lot of his work driving MLI can be found on his LinkedIn profile. Additionally, he maintains several GitHub repositories that share resources on a number of different topics related to MLI. (They’re all open-source repositories, and he happily welcomes PRs there.)


Sayak: Hi Patrick! Thank you for doing this interview. It’s a pleasure to have you here today.

Patrick: I’m happy to be here! Hope everyone is staying well!

Sayak: Maybe you could start by introducing yourself — what is your current job and what are your responsibilities over there?

Patrick:

  • I recently founded bnh.ai. It’s a new boutique law firm that specializes in helping organizations detect, avoid, and respond to liabilities (e.g., compliance, litigation, or reputational risks) caused by machine learning (ML) and artificial intelligence (AI). I provide leadership on the technical side of the house at bnh.ai. My partner is the legal lead.
  • Today, ML can break the law, get a company sued, or get you or your organization called out in the press. If you’d like to avoid these situations in the future or need help now, we are here for you! I also think we can expect more ML-specific regulations in the future, and we can help organizations get ready for those as well.
  • I also advise H2O on how to incorporate ideas around explainable AI (XAI), interpretable models, discrimination testing and remediation, data and ML privacy, security, and model governance into their ML software products.

Sayak: That was very well laid out and detailed. Thanks, Patrick. You have driven so many efforts around MLI. Would you like to mention the primary motivation behind them?

Patrick:

  • My motivations were commercial at first. Coming from SAS and seeing how ML projects actually worked and did not work all over the world, I learned that explainability and deployment were the key factors in a commercial ML project’s success or failure.
  • My motivations became more human-centered later, when I got to H2O. There I learned how impactful AI is to people, and that everyone would want to understand this tech.
  • Now my motivations are centered around human needs and risk. ML is great but has risks associated with it, like nearly every other technology. Interpretability is key to understanding risks. You can’t mitigate risks you don’t know about. These risks can cause people harm and prevent organizations from using ML. With bnh.ai, I want to help data scientists and organizations innovate with ML in a responsible, risk-aware way.

Sayak: Your transitions have been very methodical and practically grounded. When you were starting in the field of MLI, what kind of challenges did you face? How did you overcome them?

Patrick:

  • The math was shaky in XAI at first. Many of the techniques were approximate or inconsistent. I think one of our earliest breakthroughs was to realize you would have less inconsistency and better surrogate models if you used constrained models, and then later we learned that complex ML models can really be interpretable themselves. (Thank you, Professor Cynthia Rudin.) Shapley values also came later, and those have been transformative because they are accurate and consistent (see the sketch after this list). (Thank you, Dr. Scott Lundberg.)
  • It was really difficult and still is very difficult to communicate complex ideas about how ML models work to non-technical professionals. Through the years at H2O, we learned to simplify, simplify, simplify our user interactions. However, I think all software developers still have a long way to go in explaining explainable AI to users through software.
  • Regulatory compliance was hard in the beginning and also continues to be a pain point. The people who need transparency the most are often in regulated industries, and they need real answers. Half-baked, approximate solutions are not interesting to them. This is actually another reason why I started bnh.ai. Compliance questions are extremely difficult and cannot be solved by data scientists alone. Data scientists need legal help for compliance use cases, and you can expect ML to become more regulated over time, so more use cases will require compliance.
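
To make the constrained-model and Shapley-value points above concrete, here is a minimal sketch of training a monotonically constrained gradient boosting model and explaining it with TreeSHAP. The data, constraint directions, and hyperparameters are hypothetical, chosen only for illustration.

```python
# Hypothetical example: a monotonically constrained XGBoost model
# explained with exact, consistent Shapley values from TreeSHAP.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # three synthetic features
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000) > 0).astype(int)

# Constrain the model: monotonically increasing in feature 0,
# decreasing in feature 1, unconstrained in feature 2.
model = xgb.XGBClassifier(
    monotone_constraints="(1,-1,0)",
    n_estimators=100,
    max_depth=3,
)
model.fit(X, y)

# TreeSHAP computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[:3])  # per-feature attributions for the first three rows
```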

Sayak: I had never thought of the compliance part the way you just described it. I would now like to ask a very basic question regarding your MLI workflow. After you have trained your ML model on a given dataset, is there any general framework that you follow to incorporate MLI?

Patrick:

  • Well, it’s very crucial to consider interpretability across the entire workflow (see Information 11(3), https://res.mdpi.com/data/covers/information/big_cover-information-v11-i3.png), and not just after a model is trained.
  • This includes getting domain experts, social scientists, and legal, compliance, risk, and audit colleagues involved from the beginning. Sadly, I’m seeing a lot of data scientists ignore this advice right now with COVID-19. Very few people out there are qualified to do this kind of epidemiological modeling and data analysis. Honestly, I’m pretty disheartened by “armchair” or even commercial ML attempts to capitalize on this tragedy, especially ones so arrogant as to think this is an ML problem that data science can solve alone, without input from medical domain experts, ethicists, and policy experts who understand the applicable regulations in healthcare and the potential public health outcomes.
  • Another reason you must consider a holistic approach is that accuracy, fairness, interpretability, privacy, and security are all linked. Consider, for instance, a model deemed fair by some definition at training time that is later hacked to become discriminatory, or think about trying to debug an ML black box that is leaking sensitive private information. You can rarely have one of these attributes without the others, in my opinion. (See the discrimination-testing sketch after this list.)
  • Such a holistic approach also includes considering AI incident response. How will you respond when your AI fails?
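
Since discrimination testing comes up earlier in the interview, here is a minimal sketch of one common test, the adverse impact ratio, often checked against the "four-fifths rule" (a ratio below 0.8 flags potential adverse impact). The data, group labels, and threshold are illustrative assumptions, not guidance from the interview.

```python
# Hypothetical example: adverse impact ratio (AIR) for binary decisions.
import pandas as pd

decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],  # model decisions (synthetic)
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],  # demographic group
})

# Acceptance rate per group; group "a" is the reference group here.
rates = decisions.groupby("group")["approved"].mean()
air = rates["b"] / rates["a"]
print(f"Acceptance rates:\n{rates}\nAdverse impact ratio: {air:.2f}")

# Four-fifths rule of thumb: an AIR below 0.8 warrants closer review.
if air < 0.8:
    print("Potential adverse impact; investigate further.")
```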

Sayak: I had never thought about the last point you mentioned. The points you raised are very concerning indeed. If you were to list some resources that an MLI beginner cannot afford to miss, what would those be?

Patrick:

  • My go-to recommendation is the Awesome Machine Learning Interpretability meta-list I maintain on GitHub; it collects most of the resources I would point a beginner to.

Sayak: That was a very humble gesture, Patrick. I would simply refer everyone to the Awesome Machine Learning Interpretability meta-list you just mentioned. What will the state of MLI be in the next five years? Which particular areas of MLI do you think will draw the most interest from the community?

Patrick:

  • ML, like aviation, nuclear power, and other powerful commercial technologies that came before it, will likely become more regulated. Data scientists will need help with compliance and incident response for ML, and we want to be there to help with bnh.ai. U.S. states, the U.S. federal government, and many other nations are already enacting or proposing AI guidance, e.g., Canada, Germany, the Netherlands, Singapore, the U.K., and the U.S. (the Trump Administration, DoD, and FDA).
  • I hope for more and better representations of causality in model architectures and more real-world ability to conduct causal inference.
  • I expect more types of directly interpretable models for unstructured data, and less reliance on pure black-box deep learning, e.g., “this looks like that” deep learning models.
  • I’d like to see more interpretable deep learning models for structured data, e.g. explainable neural networks (XNN).
  • I’d also hope for the continued maturation of model debugging methods and practices, e.g., “Debugging Machine Learning Models” or “Real-World Strategies for Model Debugging.” (See the debugging sketch after this list.)
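
As a small illustration of the model debugging practices mentioned above, here is a minimal residual-analysis sketch: slicing a trained model’s errors by segment to find where it fails. The data, model, and segment definition are hypothetical.

```python
# Hypothetical example: residual analysis by segment for model debugging.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, size=(1000, 2))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=1000)

# Train on the first 800 rows; hold out the rest for debugging.
model = GradientBoostingRegressor().fit(X[:800], y[:800])
residuals = y[800:] - model.predict(X[800:])

# Compare error magnitude across segments of the input space. Errors
# concentrated in one segment point to a bug, bad data, or a weak spot.
segment = np.abs(X[800:, 0]) > 1.5
print(f"mean |residual|, |x0| > 1.5:  {np.abs(residuals[segment]).mean():.3f}")
print(f"mean |residual|, |x0| <= 1.5: {np.abs(residuals[~segment]).mean():.3f}")
```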

Sayak: I think this would definitely work as a checklist for folks interested in pursuing MLI further. Debugging neural networks is something that excites me a lot, actually. As a practitioner, one thing that I often find myself struggling with is learning a new concept. Would you like to share how you approach that process?

Patrick:

  • I read about a new thing, then I try to apply it through code, writing, or presentations, then I talk to people about it, and I repeat. I really do learn a lot from different communities, both through online and real-life discussions.
  • I also know I am a VERY traditional “book learner.” People learn in different ways. So I can only say what works for me. I learned about ML through textbooks and FORTRAN. (I’m probably in the last generation to do so.) I’d actually turn this question around to you, Sayak, to hear what people do these days.

Sayak: Haha, that is a fun fact to know (FORTRAN, really!). If you ask me, I am also more of a reader; I prefer learning by reading books cover to cover. I also like reading through an implementation when I struggle to understand a concept. Finally, any advice for beginners?

Patrick:

  • Don’t trust the internet too much. If you have customers, listen to them. If you have mentors, listen to them. At SAS, I had lots of customers and mentors, and they told a very different, and truer, story about data mining and ML than Twitter, Quora, Kaggle, or Medium did. At that time, ML discussions on the web were absolutely dominated by deep learning on benchmark datasets like MNIST, CIFAR, and ImageNet. Almost nothing could have been less interesting to my mentors and customers at that time.
  • These deep learning models, aside from being unintelligible black boxes, were simply not deployable in any meaningful way. Very few, if any, enterprise database systems or scoring engines could run Python at that time, much less CUDA. Moreover, results were being reported on static, well-known benchmark datasets, and not on live, new, unseen data. I’ll always remember this example of how the hottest thing on the web was nearly meaningless on the ground, and I ended up reflecting on this experience a few years later in “The Preoccupation with Test Error in Applied Machine Learning.”
  • Try to follow the scientific method. Form a hypothesis, then conduct an experiment. (See the sketch at the end of this list.)
  • I only know this because I do it myself: data scientists often fall prey to confirmation bias. We snoop around a data set, talk to our colleagues, and then devise a hypothesis to test through a modeling experiment. That’s not following the scientific method, and you’re likely to find whatever you or your colleagues thought would be in the data, whether it exists in reality or not, because confirmation bias is such a strong type of bias.
  • Personally, I interpret the scientific method as forming a hypothesis, collecting appropriate data, performing an experiment, and then analyzing the results to determine if the hypothesis is true. I’m really not sure how combing through redundant, noisy, incomplete data, collected for some operational purpose (i.e., “data exhaust”), looking for a hypothesis to test, and confirming that hypothesis with an overfit black-box model came to be called data “science.”
  • All of us, data scientists, should learn more about experimental design techniques.
  • Engage with the communities in which you are interested politely and professionally. Ask them questions.
  • Be aware of history:
      ◦ Fifty Years of Data Science
      ◦ 50 Years of Test (Un)fairness: Lessons for Machine Learning
      ◦ Statistical Modeling: The Two Cultures
      ◦ A Very Short History of Data Science

  • Treat Kaggle as a learning platform, not a religion. Kaggle is a game. Real-world ML is not. And if you treat real-world ML like a game, you could do more harm than good.
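
To ground the scientific method advice above, here is a minimal sketch of the discipline Patrick describes: state a hypothesis first, then test it once on data that was not used to form it. The data, effect size, and choice of test are synthetic assumptions.

```python
# Hypothetical example: a pre-specified hypothesis tested on held-out data,
# instead of snooping the whole data set and confirming what was found.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic data with a real, modest association between x and y.
x = rng.normal(size=500)
y = 0.3 * x + rng.normal(size=500)

# Exploration data may inspire hypotheses; confirmation data tests them.
# (The exploration half is left unused here for brevity.)
x_explore, x_confirm = x[:250], x[250:]
y_explore, y_confirm = y[:250], y[250:]

# Hypothesis, stated before touching the confirmation data:
# "x is positively associated with y." One pre-specified test, run once.
r, p_value = stats.pearsonr(x_confirm, y_confirm)
print(f"correlation = {r:.2f}, p-value = {p_value:.4f}")
```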

Sayak: Thank you so much, Patrick, for doing this interview and for sharing your valuable insights. I hope they will be immensely helpful for the community.

Patrick: I hope it’s helpful too because I’ve learned so much from the data science community over the years. Thank you Sayak for your patience and interest … and for your PRs!
