
IARPA Retrospective: Lessons Learned from 10 Years of "IARPA Hard Problems" with Former Director Dr. Jason Matheny Part 1

IARPA: Disbelief to Doubt
S01 E02 November 04, 2024

IARPA Retrospective: Lessons Learned from 10 Years of "IARPA Hard Problems" with Former Director Dr. Jason Matheny Part 2

IARPA: Disbelief to Doubt
S01 E02 November 18, 2024

IARPA Retrospective Teaser

IARPA: Disbelief to Doubt
S01 E02 October 24, 2024

Leap of Faith (Episode 1 Part 1)

IARPA: Disbelief to Doubt
S01 E01 September 09, 2024

Force Multipliers (Episode 1 Part 2)

IARPA: Disbelief to Doubt
S01 E01 September 23, 2024

Leap of Faith (Episode 1 Teaser)

IARPA: Disbelief to Doubt
S01 E01 April 25, 2024

Disbelief to Doubt Podcast Trailer

IARPA: Disbelief to Doubt
S01 E00 April 25, 2024

IARPA: Disbelief to Doubt

1 Season, 7 Episodes

The IARPA: Disbelief to Doubt podcast explores the history and accomplishments of the Intelligence Advanced Research Projects Activity (IARPA) through the lens of some of its most impactful programs and the thought leaders behind them in IARPA's technical Offices of Collection and Analysis. In each episode, IARPA leadership, Program Managers (PMs), technical staff, and research performers will offer candid insights into their personal journeys, what attracted them to IARPA, and how the unique mission of the organization enables them to be force multipliers as they tackle some of the Intelligence Community's most difficult challenges.

In this episode of Disbelief to Doubt, we turn back the clock to 2018 and sit down with former IARPA Director Dr. Jason Matheny, who served as IARPA director from 2015 through 2018. In part one of this two-part interview, we discuss a wide range of topics, including Jason's early career in science, how being a "force multiplier" drew him to IARPA, lessons learned from IARPA's first 10 years of tackling "IARPA Hard" problems, and the role of the U.S. as a leader in artificial intelligence (AI) research and development.

Timestamp Caption
00:00:00
IARPA: Disbelief To Doubt Podcast

Episode 2: IARPA Retrospective with Former Director Dr. Jason Matheny: Lessons Learned from 10 Years of “IARPA Hard” Problems

Guest: Dr. Jason Matheny

Dimitrios Donavos: IARPA sponsors research that tackles the intelligence community's most difficult challenges and pushes the boundaries of science. We start with ideas that often seem impossible and work to transform them from a state of disbelief to a state of just enough healthy skepticism or doubt that by bringing together the best and brightest minds, we can redefine what's possible. This podcast will explore the history and accomplishments of IARPA through the lens of some of its most impactful programs and the thought leaders behind them. This is IARPA, Disbelief to Doubt.

Dimitrios Donavos: Welcome back to IARPA: Disbelief to Doubt. I'm your host, Dimitrios Donavos. In this retrospective episode, previously recorded in 2018, we sat down with former Director Dr. Jason Matheny, who served as IARPA's director from 2015 through 2018, and is currently the president and CEO of the RAND Corporation. In part one of this wide-ranging interview, we spoke with Jason about his early career in science, how empowering program managers to be force multipliers drew him to IARPA, some lessons learned over the organization's first decade, funding breakthrough research, and much more. Take a listen.

Charles Carruthers: I'm Charles Carruthers with the Office of the Director of National Intelligence, and joining us today is the Director of the Intelligence Advanced Research Projects Activity, Dr. Jason Matheny. Jason, welcome.

Jason Matheny: Thanks, Charles.

Charles Carruthers: I'll be talking with Jason about topics such as IARPA's milestones under his tenure and beyond, as well as a little bit about his background and his personal reflections on being director of the organization charged with bringing innovative solutions to the intelligence community. So, let's get right into it.

Jason, you've been in the scientific community for a number of years now. And while we know you as our director and have a general idea of your professional accomplishments, we thought we'd start off with a sort of fun question, and that is, how was it, and when was it, that you decided to become a scientist?

Jason Matheny: Well, I started out my career in public health, in international health, focused mostly on infectious diseases. And I certainly had not anticipated going into national intelligence or a scientific organization that supported national intelligence. It wasn't until 2002 when I was in India working on malaria and tuberculosis and HIV. And while I was there, a group in the United States synthesized from scratch the first virus from its chemical constituents.

And this was really the first demonstration that you could synthesize a virus de novo. A lot of us in the public health community saw this as a milestone that raised a lot of concerns. Some of the people that I worked with had been part of the effort in the 60s and early 70s to eradicate the smallpox virus.

And the reaction that they had was, look, you know, we spent decades eradicating this virus from Earth, and now it might be possible in the future that somebody could create this virus from scratch. Or something worse, something that might exceed, say, the evolutionary inventions of novel viruses to produce something that was deadlier or more communicable.

So I moved then from working on traditional public health and infectious diseases to working in biosecurity, first at the Center for Biosecurity and the Johns Hopkins Applied Physics Laboratory. That eventually brought me to work on projects for the Defense Department and the intelligence community. And then what brought me to IARPA was that I admired Lisa Porter and Peter Highnam, who were at IARPA and helped start IARPA and who believed in applying rigorous science to important security problems.

So I wanted to become a program manager here because I wanted to have a multiplier effect being able to fund outstanding scientists and engineers to work on solutions to the problems that I really worried about, including problems related to biosecurity, but also a range of other things that were keeping me up at night.

And one way of describing that multiplier effect that we sometimes talk about in public is that we're sort of like a crowd-sourced version of Q-Branch from the James Bond movies. So in the movies, you've got this guy in a lab coat who's kind of snarky and gives James Bond all of his shoe phones and other gadgets.

But our problems are way too complex and too numerous to just have a few guys in lab coats working on these problems. So really the most efficient way for us to solve these problems is to crowdsource them to the broader science and technology community. And that's what we've succeeded in doing. We now work with 500-plus organizations around the world to solve our biggest intelligence problems.

Charles Carruthers: And DARPA has the same business model, correct?

Jason Matheny: That's right. So, yeah, we're based on the DARPA model, which has been so successful for the last 60 years in developing breakthroughs for national defense. And 10 years ago, there was recognition that the intelligence community needed its own DARPA. It needed a DARPA-modeled organization that focused on science and technology needs for national intelligence.

And, you know, with this being our 10th anniversary, looking back, I think we've successfully applied that model from DARPA to a range of hard intelligence problems and analysis and collection and counterintelligence in ways that I'm just deeply impressed by because we're able to tap the world's scientific engineering expertise.

Charles Carruthers: You mentioned that synthetic biology is one of the topics that keeps you up at night when you badge out and go home after work. You often speak about pandemics and synthetic biology. Do you think the IC, do you think we, place enough emphasis on research areas in that field going forward?

Jason Matheny: Well, I think we're entering an especially risky period of biotechnology where the offensive applications of the technology have an advantage over the defensive applications. As one example, last year a pair of scientists were able to synthesize from scratch a pox virus about the same length as the virus responsible for smallpox. They accomplished that for $100,000 using commercially available equipment, methods, and expertise.

Just two guys for $100,000. In contrast, developing a new vaccine costs about a billion dollars. So you've got this 10,000-fold asymmetry between developing a novel virus and developing a new vaccine that addresses that virus. It's a really hard challenge to address what's going on in biotechnology.
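The 10,000-fold figure follows directly from the two costs quoted here; a quick back-of-the-envelope check, using only the interview's rough numbers rather than independently verified estimates:

```python
# Rough check of the asymmetry described above, using the interview's own figures.
virus_synthesis_cost = 100_000            # dollars: "just two guys for $100,000"
vaccine_development_cost = 1_000_000_000  # dollars: "about a billion dollars"
print(vaccine_development_cost // virus_synthesis_cost)  # 10000, i.e. a 10,000-fold gap
```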

The upside potential is enormous. You've got all sorts of innovations that can help enable breakthroughs for human medicine, for agriculture, for materials, for energy. And I think the net benefits are going to be extraordinary for the next few decades. But we have to safely navigate the risks. And the risks are either intentional or accidental novel pathogens that come from laboratories and may have properties that are substantially worse than natural pathogens.

Charles Carruthers: Interesting. If we could back up a little bit and talk about your career at IARPA, prior to you becoming the director, you started as a program manager here. Can you speak to this experience and what excited you about joining this organization and becoming a program manager and how it kind of prepared you for your current role as our director?

Jason Matheny: Well, I think the thing that most attracted me to the program manager role at IARPA was that there were problems that I as an individual researcher felt like I couldn't solve. I wasn't smart enough and I didn't have a way of multiplying myself and my hours such that I could accomplish what I wanted to.

So coming to IARPA as a program manager, the attraction was that I could multiply my research effort by funding a hundred scientists and engineers who were smarter than me, who could spend more time working on this problem than I could alone and solve problems that I wouldn't have been able to. So that's what attracted me and I think the experiences that I had as a program manager are some of the things that I'm proudest of.

I mean, I think probably the thing that I'm most proud of is just the, you know, the people who I've been able to work with, including the scientists and engineers who we've funded to work on many of our programs, and then also the people who we've hired at IARPA who are themselves extraordinary in being able to frame hard problems, provide those problems in a way that thousands of scientists and engineers can work on them and contribute to key intelligence missions.

There are a few programs that I was directly involved with that were so rewarding to work on. I mean, one was the work that we funded on improving human judgment, and in some of that work, say in the ACE program, I got to work with some of my scientific heroes, people like Phil Tetlock and Barb Mellers and Danny Kahneman, and they developed methods that substantially improved human judgment about important classes of geopolitical events, the kinds of events that analysts are asked to make judgments on.

Charles Carruthers: And Phil Tetlock, he was the author of Superforecasting, correct?

Jason Matheny: That's right, yeah. His book with Dan Gardner describes the ACE program in great detail and talks about what we learned from that program. Really amazing research, in that they were able to recruit thousands of volunteers who gave up their time in order to make millions of forecasts about real-world events and keep score.

And I think running those kinds of large forecasting tournaments is something that we've now used in a range of research programs here to test a variety of theories in political science and epidemiology and cybersecurity. And our interest in this really grew from some early experiences at IARPA, which is that we got large numbers of business pitches where people came in and said that they could have predicted 9/11, and they would have, you know, these very pretty PowerPoint slides showing basically their model being run in reverse.

Charles Carruthers: Kind of easy to predict history that's already occurred.

Jason Matheny: That's right. Yeah, much easier to predict history than to predict the future, and we didn't think that those PowerPoint slides were a fair way to measure whether or not these methods were likely to be effective.

So we started inviting the folks who were making these pitches to send us forecasts for things that hadn't happened yet so that we could measure the number of false positives and false negatives. And I'm sad to say that of the companies that claim to be able to predict the future, not many agreed to submit forecasts in these tournaments.

But lots of research organizations, especially colleges and universities and R&D offices within science organizations have signed up because they want to answer a really important scientific question, which is what kinds of events are in principle possible to forecast and what kinds of events are essentially a coin flip, which is an important thing to know about the world so that you can be appropriately humble.
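The "keeping score" described here is typically done with a proper scoring rule for probability forecasts. The sketch below uses the Brier score as a minimal illustration; the forecasts are made up, and the assumption that this mirrors how the tournaments actually scored participants is just that, an assumption.

```python
# A minimal sketch of scoring probabilistic forecasts against real-world outcomes,
# assuming a Brier-style scoring rule (hypothetical forecasts, not tournament data).

def brier_score(forecast_prob: float, outcome: int) -> float:
    """Squared error between a forecast probability and the 0/1 outcome.
    0.0 is a perfect forecast; an always-50/50 forecaster scores 0.25."""
    return (forecast_prob - outcome) ** 2

# (forecaster's probability that the event occurs, what actually happened)
forecasts = [
    (0.90, 1),  # confident "yes", event occurred
    (0.20, 0),  # leaned "no", event did not occur
    (0.60, 0),  # leaned "yes", event did not occur -- a false positive of sorts
]

mean_score = sum(brier_score(p, y) for p, y in forecasts) / len(forecasts)
print(f"Mean Brier score: {mean_score:.3f}")  # lower is better
```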

Charles Carruthers: Right.

Jason Matheny: And I'll say, you know, for anybody who wants to participate in this kind of research...

We have three current forecasting challenges that are open to public participation. That includes the hybrid forecasting competition, the geopolitical forecasting challenge, and the CREATE program, all of which involve crowdsourcing to improve intelligence analysis, and anyone on earth can participate.

Charles Carruthers: That's good to know. So if we could back up just a little bit. You mentioned that it was working on hard problems that drew you to IARPA. Let's talk about risk for a moment.

IARPA balances its portfolio with programs that often mean taking on a great deal of risk. Sometimes that means embracing failure. Can you speak to what makes a problem at IARPA hard and why as an organization we are uniquely positioned to succeed even in the face of seeming failures along the way?

Jason Matheny: Yeah, that's a great question. I'll give a couple of examples of programs that were a great scientific success, in that they contributed to our knowledge even though they didn't achieve the technical goals that the programs were aiming at.

And we've been lucky to have incredibly rigorous and honest program managers, Adam Russell and Alexis Jeannotte, who are two program managers who have sort of de-hyped two different theories.

And one was the theory that oxytocin, a chemical, was centrally responsible for interpersonal trust.

Charles Carruthers: Okay.

Jason Matheny: It just didn't bear out through some very careful experiments in a program called TRUST that IARPA ran. The hypothesized role for oxytocin just didn't hold up to scrutiny. A second theory that they dehyped was that techniques like cognitive training and transcranial electrical stimulation could be reliably used for cognitive enhancement.

And there are some specific cases where those techniques work. But after collecting the world's largest data set on the effects of these interventions on a range of cognitive tasks, all very carefully done, the effects were weak and inconsistent. And work like this, in which you earnestly pursue scientific truth even when the results are negative ones, and you publish those negative results, is among the things that we're really proudest of at IARPA.

If everything that we fund succeeds, then we're either picking problems that are too easy or we're just not being honest with ourselves. So I'm always reassured when I see a percentage of our programs fail to meet their goals because of technical ambition.

Charles Carruthers: Why do you think there is a reluctance to publish failure?

Jason Matheny: Yeah, I think there's a lack of professional reward for publishing a negative result. For one, scientific journals are often reluctant to publish them because reading about things that didn't work is less fun than reading about things that did work.

Everybody wants to have a splashy, revolutionary success.

Charles Carruthers: Sure.

Jason Matheny: But we learn as much science from the negative results, and in some cases we learn more from those than we do from the big splashy positive results.

Charles Carruthers: So in that vein, young aspiring scientists, they're essential to the viability of IARPA's future. Yet, as you pointed out, many of them are coming out of institutions that rely on more traditional scientific and funding models where success is the focus. If you were speaking to one of these next-generation scientists, how would you inspire them to think big or outside of the box?

Jason Matheny: So one thing I wish that I had focused on earlier in my career was taking on the problems that I thought were the most important problems to work on. There's an essay that I wish I had read years earlier by Richard Hamming called "You and Your Research," in which he basically makes the argument that if you're not working on what you think is the most important problem, then you're wasting your own time and you're probably wasting somebody else's money.

You know, it's often more convenient to focus on problems that have sort of low-hanging solutions, but we need more help working on the hardest and most important problems. And I would say one category of problems that are most important for us to solve are the kinds of problems that represent existential risks to the nation and to human society.

So if you think about, like, the last, you know, 200 years, say, of human history, things have been on a really nice upward trend, right? So life expectancy has doubled. You've had more than a tenfold increase in average incomes in the world. You've had, you know, a 90% reduction in infant mortality, a 90% increase in literacy.

If you look at these sort of basic indicators of human health and welfare, everything's on the up and up. And if you look at humanity's ability to be resilient in the face of tragedy and conflict, I mean look what we've weathered just in the last century. So we had two world wars. We had the 1918 influenza pandemic, which killed more than 50 million people in a single year.

Charles Carruthers: Wow.

Jason Matheny: Probably the most significant mortality event in human history per unit time. And we muddled through, right? So, I mean, we're still here. Civilization kept going forward. So what are the kinds of risks that we would not recover from as a society? And some of those risks are ones that national intelligence is really focused on, on both estimating the risk and figuring out ways to counter it. So you've got the risk of nuclear war, whether intentional or accidental, and nuclear proliferation.

You've got the risks of global pandemics that could be natural, intentional, or accidental, particularly due to developments in biotechnology. You've got cyberattacks potentially against critical infrastructure like our power grids or our financial systems. These, I think, represent real catastrophic risks. For scientists and engineers who are looking for hard problems to solve that will really matter to the future of our country and our society, those are the ones that I would focus on, and we need a lot of help.

Charles Carruthers: IARPA's CAUSE program immediately comes to mind when it comes to cybersecurity.

Jason Matheny: Yeah, the CAUSE program is a great example of a non-traditional approach to a very hard problem. The problem is how do you detect and even forecast cyberattacks?

In general, we learn about cyberattacks months after the fact. And, you know, many of the most widely publicized cyberattacks were discovered many months after they happened. So the goal of the CAUSE program is can we detect the early indicators of cyberattacks during their planning stages.

And to give a few examples, the CAUSE program is looking at patterns in chatter on hacker forums. It's looking at the black markets for malware that tend to follow the same laws of supply and demand as other markets. So if you've got people who are trying to buy a bunch of zero-day viruses on the black market, the price for these viruses goes up.

Charles Carruthers: In the CAUSE program, they'll be using artificial intelligence, correct, to do this?

Jason Matheny: That's right, yeah. Machine learning is a big part of that program because you simply can't analyze that volume of data just using human eyeballs.

Charles Carruthers: So on that topic, I know you've been at the forefront of discussions about the rise of artificial intelligence and who's winning the race on advancing it. What are some of the advantages you think the United States has that will allow us to stay a step ahead?

Jason Matheny: Well, the U.S. has a lot of advantages right now compared to the rest of the globe. I'd say principally our university system, which is unrivaled in the world, and draws in the best and brightest students from around the world. We also have a highly competitive marketplace in AI and machine learning, which is a net advantage. It's not a perfect system, but if an idea is stupid, there are pretty strong financial motivations to discover that it's stupid sooner rather than later and find some other method that works better and is gonna be more profitable.

I'd contrast that with the system in China where the government essentially picks the winners in advance and in the field of AI, China has announced that it's picked the winners. It's picked four companies essentially to lead AI within China.

So the downside of that approach that's being used in China is that the companies could go down some blind alleys and there won't be a competitor to take them down or to correct them.

Charles Carruthers: Interesting.

Jason Matheny: So they can race off in the wrong direction without correction. They also, though, can race off in the right direction and push more energy into that more quickly than, say, a market-based system.

So, it depends in part on whether they're picking the right direction and how many surprises there will be along the route. My money would be on the United States being the world leader in AI for some time to come, if the U.S. makes the continued investments in research that have been responsible for its historical success and leadership in AI.

And you know, part of that is the university system and ensuring that federally funded university R&D continues. There is a need for continued investment and for thinking very carefully about how we ensure that we attract and retain the world's strongest scientists and engineers in AI, which is not a given.

Charles Carruthers: Let's talk about oversight. What role, if any, do you think the United States government should be playing in oversight of artificial intelligence?

Jason Matheny: Well, right now, I mean, this is very early on in the technology, and I think it's easy to overhype what AI developments are taking place in laboratories.

Charles Carruthers: So no rise of the machines, right? No Terminators.

Jason Matheny: Yeah, no Terminators. And I think in general, the concerns about Skynet should probably be replaced instead by concerns about, like, digital Flubber.

I mean, basically systems that are pretty stupid but are just misdirected or fragile. So one of the things that we worry about at IARPA is not the sort of Terminator scenario but instead off-the-shelf systems that are either poorly trained, they're trained on biased data that don't represent the real world, or there are vulnerabilities within those systems that are fairly easy for an intelligent adversary to attack. And some of those kinds of vulnerabilities are now widely published.

Just in the last few years, there's been a lot of interest in so-called adversarial examples, in which you can spoof an AI into thinking that a picture of a school bus is actually a picture of an ostrich or a picture of a tank or whatever.

Charles Carruthers: And that's done with very little effort.

Jason Matheny: That's right. Yeah, this is stuff that high schoolers can now do in their spare time. It's become sort of a favorite parlor trick of budding computer scientists to fool a state-of-the-art classifier into making a mistake. And unfortunately, it's really, really easy. And we have not taken the kind of approach to AI that we have with cyber systems in general.

We really need to become more cynical and a little more paranoid about how these kinds of systems can be attacked, at least systems that are likely to be placed on critical infrastructure or really important networks where making a mistake is catastrophic. So I think this is one area where we've been pushing a lot on finding technical approaches to address these vulnerabilities within AI systems.
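The spoofing described above has a standard minimal form in the research literature: nudge an input a tiny amount in the direction that increases the classifier's loss, known as the fast gradient sign method. The sketch below applies it to a stand-in untrained model and random input; the model, label, and perturbation budget are illustrative assumptions, not anything drawn from an IARPA program.

```python
# A minimal fast-gradient-sign-method (FGSM) sketch of an adversarial example.
# The "classifier" is an untrained linear layer over a flattened 8x8 input with
# 10 classes; a real attack would target a trained image classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(64, 10)
model.eval()

x = torch.rand(1, 64, requires_grad=True)  # benign input
true_label = torch.tensor([3])             # its assumed correct class

# 1. Loss of the correct class on the benign input.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# 2. Step each input feature slightly in the direction that increases the loss.
epsilon = 0.1                              # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# 3. The change to the input is tiny, but the predicted class can flip.
print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max feature change:    ", (x_adv - x).abs().max().item())
```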

Dimitrios Donavos: Thank you for joining us. For more information about IARPA, and to listen to part two of this two-part series, visit us at iarpa.gov. You can also join the conversation by following us on LinkedIn and on X, formerly Twitter, @IARPANews.


Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The art and science of prediction. Crown Publishers/Random House

"You and Your Research," by Richard W. Hamming, presented at the Bell Communications Research Colloquia Series on March 7, 1986.

Wikipedia contributors. (2024, August 29). Joy’s law (management). Retrieved from https://en.wikipedia.org/wiki/Joy%27s_law_(management)

IARPA Neuroscience Programs (TRUST, SHARP)

IARPA Superforecasting Programs (ACE)

IARPA Cybersecurity Programs (CAUSE)

IARPA AI Programs (TrojAI)

IARPA Biotechnology Programs (FELIX)

Referenced IARPA Funding Mechanisms: Seedlings

Previous Prize Challenges

IARPA Research Technology Protection 

ODNI 100/500 Day Plans

In this episode of Disbelief to Doubt we continue our conversation previously recorded with former IARPA Director Dr. Jason Matheny, who served as IARPA director from 2015 through 2018. In part two of this two-part interview we discuss the impact of IARPA's mission on national security and science, overcoming the challenges of technology adoption, Jason's parting wisdom on future trends and challenges in national security research, and much more.

Timestamp Caption
00:00:00
Dimitrios Donavos: IARPA sponsors research that tackles the intelligence community's most difficult challenges and pushes the boundaries of science. We start with ideas that often seem impossible and work to transform them from a state of disbelief to a state of just enough healthy skepticism or doubt that by bringing together the best and brightest minds, we can redefine what's possible. This podcast will explore the history and accomplishments of IARPA through the lens of some of its most impactful programs and the thought leaders behind them. This is IARPA, Disbelief to Doubt.

Dimitrios Donavos: Welcome back to IARPA, Disbelief to Doubt. I'm your host, Dimitrios Donavos. In part 2 of this two-part episode, we continue our conversation with former director of IARPA, Jason Matheny, previously recorded in 2018. We spoke with Jason about the impact of IARPA's mission on national security and science, the promise and perils of artificial intelligence, Jason's parting wisdom on future trends and challenges in national security, and much more. Take a listen.

Charles Carruthers: Let's switch gears and talk about innovation a little bit. As you meet with research partners, what is most encouraging about the pace of innovation outside the intelligence community? And what can we do as an organization to assure we continue to play a significant role in guiding the future of technological innovation while staying ahead of the curve?

Jason Matheny: Well, I think I'm a big proponent of Bill Joy's law, which is that most of the smartest people work for somebody else. And no matter what organization you work for, that's generally going to be true, that there's just a whole lot more experts outside than inside.

So the first thing that I think the intelligence community and the broader national security community needs to do is to find ways of openly working with scientists and engineers who are in academia, who are in industry, who are in national labs, and finding ways of leveraging their extraordinary talents and focusing some of their brain power on these problems of national and global importance. I think part of that is the need to be open about what our problems are. If you want to get help in solving your problems, you have to find a way of advertising what those problems are. And our problems in national intelligence are complex and they're going to require lots of hard work by lots of people who work in lots of disciplines.

So we have to find ways of collaborating with these organizations, some of whom haven't worked with the intelligence community before. And that's part of why IARPA is particularly open and outward facing, is to recruit more brain power. We have a variety of ways of reaching those researchers. One is through our research programs, which are multi-year, multi-organization, multi-discipline, very large research efforts, typically in the tens of millions of dollars spread over several years.

But we also have smaller investments in research that are faster, that have shorter turnaround times, often with less bureaucracy involved. One of those is seedlings, which we fund as typically 12-month studies to investigate a really early-stage idea, to get us from disbelief to doubt on a concept before we make a larger investment. And then we also fund a variety of prize challenges.

Charles Carruthers: Right.

Jason Matheny: And I love these prize challenges in part because they're so cost effective. I mean, you can motivate so much great effort with a relatively small cash prize. And they're lightweight. They don't require that somebody submit a proposal or that somebody have an approved accounting system.

Charles Carruthers: And they're non-contractual.

Jason Matheny: They're non-contractual, that's right. They don't have to sign any paperwork. If they have a solution to a problem and they submit that solution to us, and that solution works better than anybody else's solution, then we give them a check for the prize money. They could be a hobbyist working in their basement. They could be moonlighting. We just want to find the best solutions.

We find that lots of people want the bragging rights of having solved a really hard problem, or they're intellectually curious about the problem. One of the amazing things is that they can be extraordinarily cost effective compared to traditional contracts. We've had cases where you had a group of hobbyists who were doing this in their spare time.

You know, in their full-time job they might have been astrophysicists, and at night they're working on a really challenging data science problem that we've posed, and collectively they end up beating, you know, a major defense contractor for, like, one one-hundredth of the price. We love seeing those kinds of innovations happen in places that you wouldn't expect.

So I think, you know, one of the great things about IARPA is just the number of prize challenges that we generate and post. At any given time we have about four that are publicly posted, and they cover a wide variety of disciplines.

Charles Carruthers: The one thing that I think is very unique about prize challenges is that in some aspects it allows us to establish what the state-of-the-art is in a particular field.

Jason Matheny: Yeah, that's exactly right. I mean, some of our programs have elected to have prize challenges running in parallel so that you can see what the state of the art is right now outside of the research program, and continually benchmark yourself against the technology that's already operating in the environment. So it's a really good way of having an external evaluation system.

Charles Carruthers: So in the vein of openness and transparency, since you became the director of IARPA, our level of transparency and engagement with the public, the media, and transition partners has greatly increased. More S&T reporters are covering more IARPA research opportunities, which I think is really great for the public. Should we strive for even more transparency?

Jason Matheny: Well, I think much of the credit for the greater coverage is due to you, Charles, and your team. And I think that transparency benefits us due to the greater level of engagement that we get from scientists and engineers working in more organizations.

So we've seen the increase in diversity of organizations working with us on our programs. And we've also gotten a lot more interest from the general public in the kind of research that we fund.

That kind of openness then I think is a net positive, but I'll mention the downside of openness, which is that we have to be very careful about not giving away the farm and making the right kind of security determination in advance of any research effort of what we can afford to share and what we can't afford to share. You know, by temperament, I'm somebody who's sort of inclined towards openness, as you know, most scientists and engineers want to share what works and what doesn't work because they know that that's how science advances. But we have a lot of challenging security issues that we can't always share.

So I think one thing that I'm really proud to have inherited at IARPA is a strong research and technology protection process that was set up by the previous directors that establishes really clear guidance on the kinds of technologies that we need to protect, the kinds of technologies that we can afford to share. And not only has this process been very useful to us, but it's also now being adopted by a range of other agencies.

There's another side of this that I think is important too, is that we need to be really deliberate, not only about the technologies that we invest in, but also the technologies that we don't invest in. And one set of questions that we’ve started to ask before we invest in any research program at IARPA, are questions about will we regret this technology if we successfully develop it? How could this technology be used against us? Can we develop defenses against misuse? Can we build into a system a way to prevent somebody from misusing the technology in ways that are destructive?

And I think being very purposeful about the ways in which we develop technology, and in some cases consciously don't develop technologies, is important.

Charles Carruthers: Speaking of the development of technology, is there anything in terms of innovation that you wanted to explore more fully, or make an impact with, either inside or outside the IC, that you didn't get a chance to do? Something that got away?

Jason Matheny: Yeah, I think there's a couple of things. So one is, I'm really proud that in the last couple of years we've increased our investment in research on what I view as some of the more catastrophic risks facing national security, including bio risks, and now new work on addressing some nuclear risks, as well as our work on cyber risks.

I think one of the other things that I wish I had had more time to pursue is thinking about how we in science funding organizations like IARPA make decisions about what to fund. And we use a review process like most federal funding organizations do that's not based on really good science about human judgment.

So, I’ll give you an example, which is every review panel, whether it's at the National Science Foundation or DARPA or IARPA or the National Institutes of Health, each of those review panels has a bunch of people essentially deliberating as a group about what sorts of proposals they want to fund.

Everything that we know from the science of human judgment says that pretty much the worst way of reaching a good conclusion is to have a bunch of people deliberating as a group, because it's prone to all sorts of bias and groupthink. So we have not been very scientific in the way that we review science. And I wish that we had run some experiments to figure out what other possible ways there are that we as a funding organization could approach this problem of deciding which projects to fund.

Because my guess is that all of us, and it's not limited to government, I mean foundations and industry organizations typically use the same kind of process of group deliberation.

But we're probably missing great bets in funding science that for whatever reason didn't make it through a review panel, either because it was viewed as too risky by one member who happened to talk loudest, or it just didn't have the right principal investigators attached to it. So I think figuring out ways of testing different hypotheses about what's the best way to fund science is something I wish I had time to pursue as a program manager.

One other thing which is, you know, I think we need to find better ways of understanding the global financial system.

Charles Carruthers: Hmm, interesting.

Jason Matheny: And you know, the global financial system is incredibly complex. I mean, you know, maybe like the single most complex artifact that human society has generated in terms of the number of actions that take place per second that involve multiple levels of organization and disorganization.

And it's remarkably well-functioning for something that's that complex, but we don't understand very much about its vulnerabilities. I'll give a couple of examples. One is that there are these historical cases of flash crashes where algorithmic traders, these automated traders that are operating using machine learning, detect signals that in hindsight were incorrect.

And that then led to a massive domino effect that produced significant declines, in one case the largest single-day decline in the Dow Jones Industrial Average, just due to essentially computers run amok in misreading signals taking place within the market.

Charles Carruthers: And we're talking about billions of dollars of loss here.

Jason Matheny: That's right, billions of dollars.

The potential for these complex systems to make errors is there, and the potential for the errors to go uncorrected for periods of time is also there, because we don't understand the ways in which these errors propagate within these complex networks.
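The domino effect described above can be illustrated with a deliberately toy simulation: automated traders that all sell once the price falls past their own stop threshold, so a single spurious dip becomes a self-reinforcing slide. Everything in the sketch is hypothetical and illustrative; it does not model any real market or any IARPA research.

```python
# Toy cascade: traders with stop thresholds who sell (pushing the price down
# further) once the price falls below their threshold. One misread signal
# causes a small dip, and the forced selling dominoes from there.
import random

random.seed(1)

price = 100.0
stop_thresholds = sorted(random.uniform(90, 99) for _ in range(50))

price -= 2.0          # an initial dip caused by a spurious signal
sold = set()
changed = True
while changed:        # keep sweeping until no further traders are triggered
    changed = False
    for i, threshold in enumerate(stop_thresholds):
        if i not in sold and price < threshold:
            sold.add(i)
            price -= 0.5   # each forced sale depresses the price further
            changed = True

print(f"traders triggered: {len(sold)}/50, final price: {price:.1f}")
```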

So we did a little bit of work in one of the programs that I worked on to model some of these large economic shifts. And we didn't get very far. I think this was sort of hubris on my part to think that if we funded a couple million dollars of research in financial modeling, we would be able to outperform the billions of dollars that the big financial organizations are spending on trying to predict market movements.

But if anybody could actually consistently predict market movements, they wouldn't be billion-dollar companies. They'd be like trillion-dollar companies. So the fact that we don't have any trillion dollar companies suggests that right now this is too complex of a system to accurately model.

But I think in the future, we'll need to find ways of at least modeling some of the most catastrophic events and misuse of markets. So the prospect that people could lead economic attacks by essentially misleading these algorithmic traders is something that I worry about. And I think we probably need to do more to look at the forensics on whether you would be able to detect such an attack.

Charles Carruthers: Let's switch gears here. As human beings, we share a natural inclination to resist change. Heading one of the most innovative organizations in the intelligence community, you must often lead the way in spearheading change. What do you draw from your own experiences that helps you overcome this natural resistance to change?

Jason Matheny: Well, I think we're lucky in that at IARPA, our full-time job is to be thinking about change and thinking about what we should be changing our methods and technologies to. So we're charged with looking over the horizon in order to imagine something that doesn't yet exist. I think that the challenge is in translating that change into something that is practical for other organizations to adopt.

And I routinely underestimated the difficulty of moving even very well-tested methods into large organizations that have a diverse set of institutional pressures. So good technology helps, but the institutional incentives also need to be there in order for the technology to be adopted.

Some things that we've done that I think have improved transition, one is we hired a chief of technology transition to focus full-time on getting new technologies transferred into operational agencies. And the team supporting Mary Ann, who's the chief, is outstanding, and they've tripled the rate of these technology transitions that are going into other agencies. So we're getting more and more technology over the fence.

But it's still a constant challenge just due to inertia. I mean, it's harder to adopt a new method than it is to continue with the existing method, even if you know that the new method would be substantially better.

Charles Carruthers: Sure.

Jason Matheny: The other thing that I think helps us in trying to figure out how to get technology into organizations, to help those organizations address some of their hardest problems, is finding local champions. And I think in general, some of our biggest successes in getting technology over the fence have been in sort of guerrilla campaigns where you've got junior officers who are really passionate about adopting this technology, and they make the technology available to their colleagues, and pretty soon you've got, like, a thousand analysts who are using this technology without requiring that you get lots of people to agree as a matter of policy to adopt this technology.

You've got analysts using it or collectors using it and I think that kind of grassroots approach has been one of the reasons that our technologies have been adopted because we get the buy-in from users.

Charles Carruthers: Jason, you've been a program manager, an office director, and the director of IARPA. If you could go back in time and have a conversation with your younger self before you entered IARPA, what would that conversation entail?

Jason Matheny: Well, first I would suggest some great stock tips to that person. Actually, I think this is the basis for, like, every Back to the Future time paradox. But I would probably tell my younger self to continue doing what worked for me pretty well, which was just cold calling people who I most admired and wanted to work with.

So people like Tara O’Toole and Tom Inglesby, Andy Marshall, Adam Russell, Richard Danzig, Peter Highnam, Lisa Porter, all these people responded to cold calls and were charitable enough to let me work with them on interesting projects.

And if I replayed history a hundred times using the DeLorean, in most of those histories, I'm doing something less interesting because I neglected to cold call one of those people.

Charles Carruthers: Gotcha.

Jason Matheny: I'd also advise myself to just start working earlier on what I thought were the most important problems. And I think I just didn't have the intestinal fortitude to think that maybe I should work on some of those problems earlier.

But I admire so much the sort of next generation of scientists and engineers who are unafraid to have the ambition of, you know, saving the world from the start of their careers. And I see these people at, you know, conferences like the Effective Altruism conferences, where you've got people who, you know, literally want to save the world from some of our biggest risks and biggest challenges and start, you know, when they're 18, not start when they're, you know, in my case, like, late 20s, early 30s. I think that's great. They've got a full decade of greater impact that they're going to have as a result.

Charles Carruthers: Let's look ahead. What about your successor? What advice would you give her or him?

Jason Matheny: Well we're very fortunate in that the people who preceded me set up a very strong foundation for IARPA to lead revolutionary and rigorous research. And it meant that I didn't have to do much to keep it focused on the most important problems. We already had the process there. We already had an approach from DARPA that's been so successful. And we had many of the policies and authorities needed to sustain IARPA's mission.

What I think the next director is going to be able to do as a result is to fine tune those processes so that we're ever more agile, so that we can reduce the time that it takes to get a new project started, to get a new scientist or engineer the resources that they need to do outstanding work. And they're going to have some new emerging risks that they'll have to contend with.

Charles Carruthers: Right.

Jason Matheny: Unfortunately, the technology that we worry about, things like biological weapons or hypersonics or adversarial attacks involving machine learning, those technologies don't stand still. So the next director is going to have to address those as well as whatever comes next.

Charles Carruthers: Jason, you've pretty much traveled the globe speaking about IARPA and its unique mission and your role as director. Can you share some of your funniest moments, something that might not be widely known?

Jason Matheny: Well, if you work for an intelligence agency and your email addresses are posted online, as ours are, then you're bound to get some interesting mail. I mean, we routinely get messages that say things like, can you please stop interfering with my microwave, because I haven't been able to cook for days.

But I think one of the more interesting proposals that we received was that someone claimed to have access to a set of crystals that allowed him to communicate directly with a set of gods.

Charles Carruthers: Interesting. Crystals?

Jason Matheny: Crystals. And he didn't specify which gods. And we didn't check any sources on this. We didn't communicate with the gods to make sure that this guy had an exclusive line with them. But he claimed that these crystals would give him perfect information about the location of nuclear weapons.

And for access to this information, the individual suggested that we could give him a $10 million a year consultancy fee. And in hindsight, that sounds like a bargain.

Charles Carruthers: Yeah, it's a pretty nice return on investment.

Jason Matheny: It is, I mean, I passed on it, but I did suggest that he volunteer in one of our forecasting tournaments.

Charles Carruthers: We might have to look into these god crystals.

Jason Matheny: Yeah, we will. I think, you know, I don't know if he's, like, competing right now in, you know, the hybrid forecasting challenge or the geopolitical forecasting prize challenge that we have. But if he's listening to this, good luck. And for others, here's your opportunity to beat the crystals by volunteering in one of our tournaments.

Charles Carruthers: Jason, this has been great. Thank you for an insightful and fun discussion. And thank you to our listeners for joining the conversation.

Jason Matheny: Thank you, Charles. This is great.

Dimitrios Donavos: Thank you for joining us. For more information about IARPA, visit us at iarpa.gov. You can also join the conversation by following us on LinkedIn and on X, formerly Twitter, @IARPANews.


Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The art and science of prediction. Crown Publishers/Random House

"You and Your Research," by Richard W. Hamming, presented at the Bell Communications Research Colloquia Series on March 7, 1986.

Wikipedia contributors. (2024, August 29). Joy’s law (management). Retrieved from https://en.wikipedia.org/wiki/Joy%27s_law_(management)

IARPA Neuroscience Programs (TRUST, SHARP)

IARPA Superforecasting Programs (ACE)

IARPA Cybersecurity Programs (CAUSE)

IARPA AI Programs (TrojAI)

IARPA Biotechnology Programs (FELIX)

Referenced IARPA Funding Mechanisms: Seedlings

Previous Prize Challenges

IARPA Research Technology Protection 

ODNI 100/500 Day Plans

Enjoy this preview of IARPA: Disbelief to Doubt Episode 2! In this retrospective episode, you'll hear former IARPA Director Dr. Jason Matheny talk about what it means to be a force multiplier as an IARPA program manager and discuss lessons learned over a decade of IARPA research breakthroughs. 

Part 1 drops on November 4th followed by Part 2 on November 18th.

Timestamp Caption
00:00:00
Jason Matheny: Well, I think the thing that most attracted me to the program manager role at IARPA was that there were problems that I as an individual researcher felt like I couldn't solve. I wasn't smart enough and I didn't have a way of multiplying myself and my hours such that I could accomplish what I wanted to.

So coming to IARPA as a program manager, the attraction was that I could multiply my research effort by funding a hundred scientists and engineers who were smarter than me, who could spend more time working on this problem than I could alone and solve problems that I wouldn't have been able to.


In Part 1 of the premiere episode of the Intelligence Advanced Research Projects Activity's (IARPA) new podcast series, we sit down with former Director Catherine Marsh to discuss IARPA's origin story and the world events that led to her joining government service and set her on a path to leading what the New York Times has heralded as "one of the government's most creative agencies." 

Timestamp Caption
00:00:00
Disbelief To Doubt Podcast

Episode 1 Part 1: Leap of Faith

Guest: Dr. Catherine Marsh

Dimitrios Donavos: IARPA sponsors research that tackles the intelligence community's most difficult challenges and pushes the boundaries of science. We start with ideas that often seem impossible and work to transform them from a state of disbelief to a state of just enough healthy skepticism or doubt that by bringing together the best and brightest minds, we can redefine what's possible. This podcast will explore the history and accomplishments of IARPA through the lens of some of its most impactful programs and the thought leaders behind them. This is IARPA, Disbelief to Doubt.

Dimitrios Donavos: In the premiere of the new podcast series from the Intelligence Advanced Research Projects Activity, IARPA, we sit down with outgoing director Dr. Catherine Marsh to discuss IARPA's origin story and the world events that led to her joining government service and set her on a path to leading what the New York Times has heralded as one of the government's most creative agencies. Take a listen…

Dimitrios Donavos: Hello and welcome to IARPA Disbelief to Doubt. I’m your host, Dimitrios Donavos, and today we welcome Dr. Catherine Marsh, Director of IARPA.

Dimitrios Donavos: Dr. Marsh, thank you for joining us today and helping us kick off Disbelief to Doubt.

Catherine Marsh: I love it, just absolutely love it.

Dimitrios Donavos: We are speaking to you today in your role as the director of IARPA, but you have had a long and distinguished career prior to your tenure here, both in government and industry, including leading the team that put lithium-ion technology on NASA's Mars Exploration Rovers Spirit and Opportunity. You hold bachelor's and doctorate degrees from Brown University in inorganic and analytical chemistry and had to overcome a disadvantaged background and the challenges of being a woman in a traditionally male-dominated scientific field. Take us back to the beginning and walk us through what sparked your desire to pursue a career in science and how you ultimately ended up leading battery development for the Spirit and Opportunity Mars Rovers.

Catherine Marsh: Sure, thank you. I welcome the opportunity. It really, it started with landing on the moon, when I was a kid, right? I remember, you know, in 1969, when we landed on the moon, I was there in the side room of our house with my mom and my dad and my brothers. And we watched Neil Armstrong walk on the moon. And-

That was a miracle to me as a little kid. And my parents were so excited as we were watching this. And yet it's like, how is it we did that, right? And it to me was just so magical that we could do something like that. And my dad was an engineer, and my mom actually was a scientist, but women weren't allowed to be scientists back in the 50s. And so, she became a schoolteacher.

And that's not a bad thing. But they always encouraged us to be curious and to ask questions. And that really was the start, back then. And then, you know, as I was growing up and my parents were both having to work to be able to make ends meet, I had to cook dinner, right? And so to learn how to cook, you have to figure out how things work in the kitchen, right? And it's not always so obvious.

I burnt a lot of things, but I also learned to be a really good baker, and baking and chemistry go hand in hand. So when I went to school, I just had the good fortune of being really curious, which lent itself to studying the sciences, at least in the public schools that I went through.

And when I was in high school, I really, really, really wanted to go to Brown University, and I said to everybody that that's where I was going to go from the time I was 13 years old. And they had a seven-year medical program that I wanted to get into. And when I was 17, I applied to college, and I was accepted into the school, but not into the program I wanted to get into.

And in my freshman year, I was studying, you know, chemistry and bio, and biology was this great big huge class and everybody was, you know, kind of backstabbing because they all wanted to get into medical school, and I was just really thriving in chemistry, and I decided that that's where I was going to pursue my education. And when I finished college, I was the only woman in my graduating class in chemistry.

But that's okay, you just make it happen. And then it was the early 80s, and there were really no jobs. And I ended up going to grad school and majoring in inorganic and analytical chemistry. So I actually studied, how do you make cold-water bleaches work? And then you're like, how does that translate into batteries? It doesn't, okay. But rates of reactions are rates of reactions, and how do you make things work with different kinds of catalysts and make them work well. And so I did some of that fundamental early work to make cold-water bleaches work much more effectively.

And then when I finished, I had the good fortune of getting hired by the Naval Undersea Systems Command at the time, in Newport, Rhode Island, to work in the lab and to be a chemist in the lab. And somebody took a chance on me as a curious young scientist. Jim Modin was his name, and he was my mentor until he passed away a number of years ago, and he just took me under his wing. I didn't know electrochemistry other than the class I had taken in college, but I was taught how to think critically when I was in college and grad school, and I applied that to the problem that we had at the time, which is how do you make, quite frankly, torpedoes work quietly at the depths and speeds that were necessary, and do that with next-generation technology.

So I've always been in that field where we're exploring what comes next, what you can do, and how you get it there. And so that wasn't lithium-ion technology, but it certainly was the development of battery technology for a wide variety of naval applications. And the naval applications are some of the most challenging in the world because of the undersea environment and the excess corrosion that you have to worry about due to dissimilar metals and things like that. People forget how challenging an environment that is to work in. So it makes you think about all the fringe cases, all those out-of-the-box, not-so-traditional ways of having to do things.

And when I left the Navy and ultimately ended up working in the private sector at Yardney Technical Products, it was the early days of lithium-ion technology. And we were one of the successful developers of lithium-ion technology, and we were competitively selected by the Air Force to do some development of lithium-ion technology for some Air Force applications. And through that, our team at Yardney Technical Products grew some really great technical expertise, such that we were then ultimately selected to be part of the development team for the lithium-ion batteries for NASA's Spirit and Opportunity. So we had a number of other successful military and DOD applications that the technology was used on before we were a part of the team that got to do Spirit and Opportunity. So we were really fortunate.

Dimitrios Donavos: You mentioned that you were one of the only women to graduate with your degree at the time, at least in your program. How do you feel that challenged you moving forward as you sort of were exploring your career and thinking about where your next step would be, especially after graduate school?

Catherine Marsh: It's an interesting question. I'm not sure I understood it to be a disadvantage, because I had such great mentors and people who gave me opportunities and helped me along the way. I never viewed it that way. I was a singleton; I often went to conferences and I was the only one there.

But what it made me do was really be an expert, such that when I was part of the conversation, I wasn't doubted or second-guessed, but I had to be that much better. And I continue to challenge myself that way: if I don't know something, I'm going to explore and study and make sure that I understand those nuances and get smart on it, so that when I am part of the conversation, I really am speaking from a technically strong position. And that's how I have continued to, I think, excel. That way, I'm not undergoing any second-guessing, because I really have got the answers to the questions that are being posed. And maybe it's because I've had such great mentors; I would not undersell that at all. It's also made me a good mentor to many other people, because I know the power of that to help them grow and develop as young professionals. You need somebody who you can brainstorm with and talk things through with, and somebody who's encouraging you to take that next step.

When I worked for the Navy, we had this internal Navy program called IR/IED, independent research and independent exploratory development, and my mentor wanted me to write proposals to that, and I was funded. And so the positive reward for what you're doing helps you continue to grow and build confidence in your technical growth and technical development. I never said to myself, I can't do that because...

Dimitrios Donavos: You mentioned Jim Modin as a mentor, and I'm curious, did you have any female mentors that impacted your life professionally, personally, or if there are other mentors that you wanted to reference?

Catherine Marsh: I would say that over my career, Stephanie O'Sullivan. I met Stephanie the first day that I EOD'd, entered on duty with the CIA. Well, not the first day, but the first day I was in the Power Sources Center. She was in there meeting with George Methlie, and I got to meet the future Deputy Director of National Intelligence and the person who became the head of the Directorate of Science and Technology at the agency. And so I had her to look up to as somebody who, wow, when I came on board to the agency, she was already very senior and only went up from there. It just so happens that the programs that I was working on were those that she had an interest in. And so I had the opportunity to actually brief her, well prepared to brief her. You never brief somebody if you're not really well prepared; I think that's important. I briefed her on the technical progress of my programs over time and got really, really good feedback and interest in what we were doing. So I would say she's been one of the great mentors that I've had as a female in my career.

Dimitrios Donavos: Speaking of mentoring and challenges, the Mars rovers exceeded their original 90-day mission life by nearly 15 years and are credited with major scientific discoveries, including evidence of past liquid water on Mars. Not many people can say their technology enabled exploration and scientific discovery on another planet. What was the most challenging thing you had to overcome to help enable that mission? And what did you learn about yourself in the process?

Catherine Marsh: So along the way, you know, nothing is ever a straight line, right? You go through a series of experimental designs to get to the point where you are able to successfully deliver product. And prior to the missions for Spirit and Opportunity, we were working on the Mars Lander mission that never went, right?

So the technology that was developed for that was resized; what we learned on that mission was the enabler for Spirit and Opportunity. And when we were doing those batteries, it just so happens they're prismatic batteries, so they stack up like a deck of cards. And when you do experiments, you test but verify. When you're going to do a space launch, you have to do what we call shake and bake tests. These are the tests where you set up your test articles like you're going to put them in the vehicle, and you put them on shaker tables and put them through the experience of what's going to happen when they're launched, right? That kind of thrust, and the shake and bake. And so...

We did that for the Mars Lander batteries, and one of my engineers was carrying that battery, which was shaped like a triangle, back to the lab after we did the tests, we thought quite successfully, and while he was doing that, it exploded like a fire-breathing dragon. Fortunately, it vented through the vent valve, which is what it's there for.

And it was pointed away from him, and nobody was hurt. Knock on wood, nobody was hurt. But we were like, what just happened, right? And so that forced us to go back and do a failure analysis. I was not part of that team; it was others within the company who do that failure analysis, so you've got the objectivity of people working on other design lines coming in to look through the fabrication of the batteries. And what we found is that we had a manufacturing flaw in one of the presses that was cutting out the electrodes. What was happening is there was a sharp edge. And sharp edges, when you build the battery, you test to see if there are shorts.

And the sharp edges were being mitigated with extra separator, which was creating a stack that was too big for where it was within the battery. Bottom line is design matters, safety matters, and paying attention to all the details really matters. And so we fixed the design, we fixed the batteries, we repeated the shake and bake tests, and everything was successful when built to the prints. But it's really making sure that you're crossing the Ts and dotting the Is and paying attention to the details. And for Spirit and Opportunity, we knew that that was what's called a Class A, must-not-fail mission. When you've got a Class A, must-not-fail mission, you have to guarantee that it's gonna be successful.

And so that battery was really overdesigned, and the battery cases for those are made out of titanium. Most of the titanium ended up on the floor of the mill where they made it, because each case was machined from a solid piece and not pieces that were put together; you want to ensure the integrity of the design. And so all of that translates into working on technology and taking risks, calculated risks, but understanding the risks that you're taking, so that when you stand behind something and you say it's gonna work, you know it's gonna work. You've done those experiments, you have asked and answered those questions, and the unanticipated questions. I think that's what I learned most from that: we didn't ask all the right questions, and we needed to be more rigorous about that. And so that again notched up the level of questions that I need to ask and make sure that I have answers to. And that was a learning experience for all of us, actually.

Dimitrios Donavos: I can only imagine what the process would be like to think through unanticipated questions on a planet that we haven't fully understood or explored. And so I can imagine that was quite a challenge.

Catherine Marsh: Well, and imagine, you know, during part of the experiments where they were operating on the planet at minus 54 degrees. That's really cold, people. You know, and we see it right now, actually, very recently around here in the greater Washington, D.C. area.

It was really cold last month. If you remember, we had snow for the first time in a while. And a lot of people who have electric cars were complaining because their batteries weren't charging up correctly and they didn't have the range that they typically have. Now, why is that? It's because it's cold outside. The environment matters. And so the batteries that were successful on the planet Mars have a very special electrolyte in them that the ones operating down here don't, because it's a much colder planet, and you have to worry about those details. We had to study and explore different kinds of electrolyte combinations to make that work in those very, very cold environments. So as I was talking to neighbors and friends during that period when they couldn't get the range, everybody was asking, how come my batteries won't work? And I was telling them, that's not the way they were designed, right? So, yeah.
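As a rough, illustrative aside (not discussed in the episode): the reason cold hits batteries so hard is that the underlying reaction and ion-transport rates fall off roughly exponentially with temperature, following an Arrhenius-type relationship. The activation energy below, and reading the quoted "minus 54 degrees" as Celsius, are purely illustrative assumptions.

\[
\frac{k(T_{\text{cold}})}{k(T_{\text{room}})} = \exp\!\left[-\frac{E_a}{R}\left(\frac{1}{T_{\text{cold}}} - \frac{1}{T_{\text{room}}}\right)\right]
\]

With an assumed activation energy of about 50 kJ/mol, dropping from roughly 298 K (room temperature) to roughly 219 K (minus 54 degrees Celsius) cuts the rate by about a factor of a thousand, which is why electrolytes have to be reformulated for Mars-like cold rather than simply reused from terrestrial designs.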

Dimitrios Donavos: Well, I can speak to that because I own an electric vehicle and I also had difficulty in that time period. I didn't think, though, that the best way to approach that was to make sure my designers were thinking about Mars. I recall reading at the time that engineers and scientists working on that program had synchronized their lives around Martian time. Is that something that you did as well?

Catherine Marsh: No, I didn't, but I will say, once we delivered the batteries and they went across the country and got integrated into the vehicles by the NASA teams, we were separated from it for quite some time before they actually were doing the flight operations. And I guess, just as a side note, I didn't get to go to the launch, because by that time I was already working for the agency here, and I wasn't in the day-to-day operations. I would have loved to. I've never been to a launch, you know?

Dimitrios Donavos: I can imagine though that you had a sense of pride watching the rovers do their thing for as long as they did, far exceeding expectations.

Catherine Marsh: Absolutely. In fact, my daughter got me a piece of artwork that's actually the trek that they took across Mars. So it's the planet Mars and the trek that they took, and it's a signed piece of art from the originator, which I hang proudly in my office at home.

Dimitrios Donavos: You just referenced working for the agency, which segues perfectly into my next question, which is how did it happen that you ended up working for the government? What led you to that service?

Catherine Marsh: So, during the development of Spirit and Opportunity, remember I mentioned it was a Class A, must-not-fail mission. And so NASA tapped into the Power Sources Center at the Central Intelligence Agency, which is a center of excellence in battery technology. They're a small but mighty group of really expert technologists. George Methlie was one of our technical reviewers, and he said to me, you know, if you're ever interested in coming to work for the agency, let me know. And so at the end of what we were doing for the rovers, I said, you know, I think that would be cool, to be able to take this and to give back to the government. Remember, I started my career at the Navy and was now in the private sector. And I said, I don't know.

And so I applied to the agency and I was in process, and then 9/11 happened. Right. The morning after 9/11, I got my call to join the CIA. I wasn't going to say no. I couldn't have anyway. And everybody asked me, what are you going to do? And I said, I have no idea. It was a leap of faith to come to work for the agency, and I have not one regret.

I'm a mid-career hire who came to work at the agency on some of the hardest technical challenges I've ever been faced with. And I can't tell anybody in detail what I do. And yet it's really important work and it's really rewarding work. You know, we may do a development of something that we only use once and never use again. That's really important stuff.

And it's not something that we can talk about or something that we advertise, because we don't want to let the world know that that's what we're able to do.

Dimitrios Donavos: 9/11 was obviously an inflection point for our country, and your history with joining government service is intertwined with IARPA's origin story. As our listeners may or may not know, IARPA was established in the wake of those terrorist attacks on September 11th, 2001. In response to the intelligence failures, both the 100- and 500-day plans issued by the Office of the Director of National Intelligence called for the creation of IARPA. Can you walk us through the gaps that the 9/11 Commission identified in their report and why they felt they needed to create IARPA to address them?

Catherine Marsh: The gaps that they identified, I don't feel confident to specifically address, I'll be honest with you. But what they did say is that they wanted the intelligence community to have what the Department of Defense had had for so long: that ability to have unwarned access, to have technologies that people don't know that you've got, to give them that new set of tools, and to not be stuck in yesterday's technology capabilities. Leading up to 9/11, there had been a reduction across the government in technology and where we were going. A lot of the capabilities that we were using were quite old. That's just a fact. And we hadn't hired in a long time. And so they did; the intelligence community went out and hired a lot of folks to make sure we understood what happened. Coming in as a scientist and a technologist, I was brought on to help develop next generation capabilities. And IARPA at that same time was stood up to bring to the IC those new capabilities, that next generation technology, but not tied to just one mission. We're not...

We were put at the Office of the Director of National Intelligence because we're here to serve the community. We're here to make sure we understand the challenges and gaps, not just at CIA or DIA or FBI or NSA, but across the community, and to say, what are the most important technical capabilities that we can bring to bear that give multiple agencies a new capability that they didn't have before, right? You know, when you're worried about doing communications securely, that impacts not just one agency, right? You can imagine how even police officers need to have secure communications, as well as FBI agents and spies and, you know, military officers in the field. And so when you sit at the Office of the Director of National Intelligence, you're worried about all the agencies and where those gaps in capabilities are. And so we bring together scientists and engineers who, once we know and identify what those gaps are, are going to take those challenges and try to create next generation technology that addresses them. We don't have requirements here.

We have challenges. And having a challenge is actually a much more exciting kind of thing than having a requirement. I'm not going out and buying a water bottle. I'm going out and saying, I need some sort of a vessel that's going to contain some sort of a liquid. What can you do to do that for me? And then how am I going to put that safely out in an environment that might have to go into space, or into a desert, or into, you know, frozen territory?

How do you address that spectrum of challenges? And so with the startup of IARPA, we were brought on board to, no kidding, do that high risk, high payoff, next generation capability development that is really well vetted. Remember when I was talking to you earlier about the risks and making sure we understand all the questions.

A full 25% of IARPA's budget goes to doing independent test and evaluation. So for every $4, one is spent on test and evaluation. All that new technology that we're developing, we make sure that we test it in a wide variety of applications, so that when we're transitioning it to our intelligence community partners, we know that it's gonna work

the way it's supposed to work, and that if there are problems with it, we have learned from them. And even for those technologies that don't work, we document those lessons learned so nobody else is making investments in technology that has failed. We pass that knowledge on too, because that's just as important, right? In the landscape of pushing toward new capabilities, making sure people know what things we've looked at that didn't work. We don't want to hide that. We want to transmit that as well, because it just means that, in accordance with the Heilmeier Catechism, the technology isn't ready today, but that doesn't mean it will never be ready.

End Part 1 of 2


M. C. Smart et al., "Performance characteristics of Yardney lithium-ion cells for the Mars 2001 Lander application," Collection of Technical Papers. 35th Intersociety Energy Conversion Engineering Conference and Exhibit (IECEC) (Cat. No.00CH37022), Las Vegas, NV, USA, 2000, pp. 629-637 vol. 1, doi: 10.1109/IECEC.2000.870847.

R. Gitzendanner, T. Kelly, C. Marsh, and P. Russell, "Prismatic 20 Ampere Hour Lithium-ion Batteries for Aerospace Applications," SAE Technical Paper 981249, 1998, doi: 10.4271/981249.

J. C. Flynn and C. Marsh, "Development and experimental results of continuous coating technology for lithium-ion electrodes," Thirteenth Annual Battery Conference on Applications and Advances. Proceedings of the Conference, Long Beach, CA, USA, 1998, pp. 81-84, doi: 10.1109/BCAA.1998.653845.

J. C. Flynn and C. Marsh, "Development of continuous coating technology for lithium-ion electrodes," IECEC-97 Proceedings of the Thirty-Second Intersociety Energy Conversion Engineering Conference (Cat. No.97CH6203), Honolulu, HI, USA, 1997, pp. 46-51 vol. 1, doi: 10.1109/IECEC.1997.659157.

Mars Spirit and Opportunity

 

ODNI 100/500 Day Plans

 

IARPA Quantum Programs (ELQ, CSQ, QCS, MQCO, LogiQ, QEO)

IARPA Facial Recognition (Janus, BRIAR)

IARPA HLT (HIATUS, BENGAL, Babel, MATERIAL, BETTER)

 

IARPA Programs Quantum Supremacy

IARPA Funded Research Nobel Prize 2012

In Part 2 of our premiere episode of the Intelligence Advanced Research Projects Activity's (IARPA) new podcast series, we continue our conversation with former Director Catherine Marsh to discuss IARPA's process for funding high-risk/high-reward research, what differentiates IARPA's mission from DARPA's, how Program Managers are empowered to make a dent in the universe, and much more.

Timestamp Caption
00:00:00
Disbelief To Doubt Podcast

Episode 1 Part 2: Force Multipliers

Guest: Dr. Catherine Marsh

Dimitrios Donavos

IARPA sponsors research that tackles the intelligence community's most difficult challenges and pushes the boundaries of science. We start with ideas that often seem impossible and work to transform them from a state of disbelief to a state of just enough healthy skepticism or doubt that by bringing together the best and brightest minds, we can redefine what's possible. This podcast will explore the history and accomplishments of IARPA through the lens of some of its most impactful programs and the thought leaders behind them. This is IARPA, Disbelief to Doubt.

Welcome back to IARPA: Disbelief to Doubt. In part two of our two-part series speaking with outgoing Director Catherine Marsh, we discuss IARPA's process for funding high-risk, high-reward research, what differentiates IARPA's mission from that of DARPA, how program managers are empowered to make a dent in the universe, and much more. Take a listen.

Dimitrios Donavos

IARPA often gets compared to DARPA, which, for our listeners who may not know, was established in 1958 to avoid technological surprise in response to the Soviet launch of Sputnik, the first artificial Earth satellite. You've already touched on this, but can you describe how IARPA's mission is similar to but also different from DARPA's?

Catherine Marsh

So, DARPA is targeted at defense. They are really going after the warfighter and enabling the warfighter with tools and capabilities. Whereas our mission is the intelligence community. And the intelligence community, for those of you who don't know, is very tiny compared to the Department of Defense. We have a lot of different missions, okay, but we're really a very small part of the overall community. And we have a very separate mission, complementary to some parts of the Department of Defense, okay, but...

Catherine Marsh

many parts of it the Department of Defense has no part in. So they're warfighters. We're hoping to prevent you from ever having to go to war, right? So in the intelligence community, we actually all have only one mission, and that is to put actionable intelligence on policymakers' desks. And when we're doing that, we are hopefully making it such that we prevent that next 9/11 from happening. We prevent that next war from happening. We prevent those threats from coming at us, and we can be preemptive about that. And so most of what we do is behind the scenes and it's not published. If we're published, that's not usually a good thing for the intelligence community.

It doesn't mean we don't have great things to talk about. We could talk about all of our technology in many ways. Certainly at IARPA, we have a very outward-facing mission, and we can do that. But the Department of Defense has got a much bigger mission, and they are also a much larger organization that takes their technology to a much higher level of technology readiness than what we can do at IARPA.

At IARPA, we stay at a technology readiness level that is around TRL 4, possibly 5. That is, we have taken disbelief to doubt. We have proven it with a prototype capability. And then we transition it to our IC partners, who are going to take it the rest of the way into those often-classified use cases and scenarios, which keeps it behind closed doors as to how that technology is ultimately going to be used and deployed. Whereas at DARPA, they're going to take it all the way up and make those first-generation capabilities. They could actually almost go to print, not completely, but they have a larger...

Catherine Marsh

capability to do that, and they're serving a much larger part of the threat scenario, the services, if you will.

Dimitrios Donavos

So, the mission requires IARPA to constantly be looking forward, and that requires developing and executing research that pushes the bounds of science. That means we have to accept a significant risk of failure. Can you talk to us about what makes the problems that IARPA takes on hard and describe how IARPA minimizes risk and approaches failure?

Catherine Marsh

So, in the intelligence community writ large, we have to knowingly take on risk. I mean, if we're not taking on risk, we're not doing our mission. And in high-risk, high-payoff research, what we do is, as I mentioned, we go out with challenges, in the sense that we go out with a research challenge through the use of broad agency announcements. We try to do full and open competition to the maximum extent possible. And so when we go out with the problem set, we are not going out with an idea that we're going to do X. We're going out with: what are the greatest ideas to achieve a next generation antenna?

If we want to be 10 times greater than the Chu limit, or if we want 50% more energy density in next generation battery technology, how are we going to get there? We're not going out with a preconceived idea. That's the challenge. Come and give us your great proposals against that. And the way we minimize the risk is twofold. First, we all have metrics. Everything has to have a metric.

A metric is the data that we are looking for that says this is not an emotional decision. This is: I need energy density that's going to achieve X watt-hours per kilogram or watt-hours per liter. And if I can't meet X, then, guys, that technical approach isn't going to work. And so with those metrics, we evaluate the technical ideas, and we always fund

Catherine Marsh

more than one technical idea against that problem that we're trying to solve. And every six months, sometimes a year, we measure the progress against those metrics through those independent test and evaluation labs, as well as the performers. You can submit your own results, of course, but we do that independent testing. And we will continue with those various technical approaches that are meeting those metrics.

But if they're not meeting those metrics, we terminate that technical approach. And so we buy down risk by looking at those various technical approaches that give us the highest probability of success in that area. One of the things that we have been doing for the past several years, because the pace of technology evolution is so fast, is that if we have multiple technical approaches that are continuing to meet the metrics, we're continuing all of them through the end of the program. The reason to do that is that we can then transition multiple capabilities to our partners across the IC, and they have options for what they can do, because unfortunately, in many places in the world, they don't like us in the US. And if they find...

our tradecraft, or they find a capability that we have, the bad guys share it with one another. And so if we've got multiple solutions and they find one, they haven't found those other ones that still have the capabilities. So we're able to fill the queue up with solutions, hopefully, though not always. Sometimes we really do down-select to just one. And quite honestly, sometimes the programs fail completely.

You know, one of the recent programs that we were doing, MIST, was a really innovative way of thinking about storage of information, because the intelligence community collects so much information. How do you store it without, you know, billions of acres of computers? If we could use DNA to store information, think about that.

Catherine Marsh

how tiny we could make that, and how we could do very interesting next generation communications. But the reality is, even though we tried, at the end of 42 months, which should have been 12, nobody met the metrics. And they were way far off. And when we looked across the landscape, and we go back to the Heilmeier Catechism: what's changed, or why do you think you can be successful at this time?

The technology isn't there yet and the landscape hasn't changed enough. It's something we'll continue to stay smart about. And so by taking different technical approaches, you are still taking on risk. Absolutely. But you're buying down that risk with a variety of potential approaches.
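To make the metric-driven down-select Dr. Marsh describes a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and not IARPA's actual process or tooling; the metric, the threshold, and the team results are hypothetical.

```python
# Minimal, illustrative sketch of a metric-driven down-select (hypothetical
# metric, threshold, and results; not IARPA's actual process or tooling).
from dataclasses import dataclass


@dataclass
class Approach:
    name: str
    measured_wh_per_kg: float  # independently measured energy density (Wh/kg)


def down_select(approaches, threshold_wh_per_kg):
    """Keep only approaches whose independent T&E results meet the program metric."""
    survivors = []
    for a in approaches:
        meets = a.measured_wh_per_kg >= threshold_wh_per_kg
        print(f"{a.name}: {a.measured_wh_per_kg} Wh/kg -> "
              f"{'continue' if meets else 'terminate'}")
        if meets:
            survivors.append(a)
    return survivors


# Hypothetical six-month milestone: three funded approaches, one shared metric.
funded = [
    Approach("Team A (novel cathode)", 362.0),
    Approach("Team B (solid electrolyte)", 341.0),
    Approach("Team C (silicon anode)", 355.0),
]
remaining = down_select(funded, threshold_wh_per_kg=350.0)
```

In the programs described here, the measurements come from independent test and evaluation partners rather than the performers themselves, and any approach still meeting the metrics may be carried all the way to the end of the program.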

Dimitrios Donavos

Dr. Marsh, as you know, science is dealing with a reproducibility crisis right now. And one of the things that really differentiates IARPA's approach is this requirement for transparency that happens through this independent test and evaluation process. You've touched on this, but I just want to make sure that we reiterate and foot-stomp it. Test and evaluation takes up a significant portion of a program budget. Talk about the scale of that relative to the size of the budget for a program.

Catherine Marsh

So, 25% of our budget is independent T&E. And we do that, you know, it's rare that we'll just have one test and evaluation partner on a program. We often have two or three, because we want to make sure that we can test all the different technical areas, but also with, no kidding, this nation's experts in that field, right? And so we're using the national labs, we're using the federally funded research and development centers, we're using the government laboratories that have that very specialized technical expertise, and we're using the university affiliated research centers to do that testing on our behalf. They help set up the testing, they have the experts in the field, and they're the ones who give us the hard truth about what the results really are.

Catherine Marsh

They also allow us to be part of the testing if we want to, to monitor, sit in, watch, so the program managers are often intimately involved with the independent test and evaluation to see the results that are coming off in near real time, because that program manager is a technical expert in the field. For example, next week we're going to be doing some testing on the SCISRS program, which is looking at RF emanations, and the people who are the program managers are experts in that area. And so they're going to want to see what the testing is telling us about those emanations and what that might mean for the use cases and things that we have to worry about.

Dimitrios Donavos

Science and the scientific community don't always incentivize publishing results where we have failure. And so we oftentimes sort of sweep it under the rug, but it can be extremely informative because there are lessons learned. What are some lessons learned from a failure you experienced in your career?

Catherine Marsh

When things don't work the way that you want them to, and we all have lots of things that don't work the way we want them to, you kind of go back to the drawing board. Not from a blame perspective, but: what question didn't I ask up front that I should have asked, right? What are these results telling me? And so, I was working on the development of a new battery technology. It happened to be a lithium seawater battery. So imagine taking lithium, which reacts with water, okay, you put lithium in water and it catches on fire. And here I am trying to make a battery out of it, right? And so, because you think out of the box and you're trying to do something different, we had all these different polymer technical approaches that we were looking at. And, you know, none of them worked, right? And so I'm like, why didn't they work? Right? Because if you think about a polymer, you think about ladies' pantyhose, you know, they stretch and they compress.

Catherine Marsh

But there are little holes in them. And if you want to make a battery work, you've got to have electrolyte transfer. We could never make that situation work, because we always had a pore that was open, and so there was always a corrosion reaction going on. And the corrosion reaction was more dominant than the electrochemical reaction. So we did get electrochemical reactions that worked, but ultimately, in the development of those systems, we hit a wall. And when we hit the wall, we were like, why did we hit this wall? And we had this aha moment: the polymers were never made to be able to do that. At the same time, because I like to stay technically sharp, we were monitoring progress on other approaches for new separators, if you will, for batteries, and there was this one that was a conductive glass that was being developed by a small company out in California.

And I said, you know what, why don't we see if we can make this work? And sure enough, for a very small amount of money, we funded this company, and they made it work, right? And so instead of a lithium seawater battery based on a polymer technology, we ended up working on technology that was a conductive glass, and we made it meet the metrics that we were going after in order to continue. And so, I guess part of it is, I don't like to give up, right? So never give up is behind some of that, but it was based on learning and people publishing. And when you hit that wall, not burying those results and...

because most of that work was on a classified effort, you know, it's not published out in the open literature, although all of those people who were investigators did publish the work that they were doing, just not for our particular applications. But internally at the agency, we documented all of that in the appropriate way so that the customer for that work ultimately

Catherine Marsh

had those results and had that knowledge base so that they didn't continue to invest incorrectly in something that wasn't going to yield the kinds of results that they need.

Dimitrios Donavos

You brought up an interesting point about publishing in the open research community. The majority of the work that IARPA funds is actually published in the open research community and our work is for the most part unclassified. But that brings up a very important question. How do we protect the outcomes of our research from ending up in the hands of our adversaries?

Catherine Marsh

Great question.

One of the most important things we do for the development of our technology is the use of our research and technology protection protocols. That is a methodology that was developed at IARPA right at the very beginning. It seems simplistic, but it's not. When you work in the classified world, we have a tendency to over-classify. And so, how do we prove to our IC customers, before we go out and do this outward-facing research, that we're taking their problems seriously? We developed this methodology that says we're going to ask this series of questions: what are you trying to do? Why do you need to protect it? Who do you need to protect it from? Why do you need to protect it from them?

That way we can define crisp lines in the road where a program goes from unclassified, to for official use only, to secret, to top secret, to compartmented information. And we review the security profile, or protocol, for every program every six months to make sure that the landscape hasn't changed and that we are paying attention to the changes that may have an impact on our programs. The perfect example of how the landscape has changed is in quantum computing. Fifteen years ago, everybody was very concerned, appropriately, about qubits. And so the classification and the protocols and the technologies that we were evaluating were very much focused on qubit methodologies. And yet, over time,

Catherine Marsh

in the recent past, photons and photonics have emerged as a change to that landscape that we now need to pay attention to, and we weren't paying attention to it because we didn't think that was a threat then. And so, if you're not doing this constant review of the landscape of what's happening, you can miss something like that, and then you could have that unwarned surprise again. And so that's why we use our research and technology protection protocols, which we gladly share across the community and with the public sector who needs to know about it, to help educate researchers who aren't inclined to think that way: why you might want to add some protection in there, and why you might want to think about where this technology is ultimately going to be used if you happen to be doing something that may be a use case for the intelligence community or the Department of Defense, because they too have to protect how technology is going to be used.

Dimitrios Donavos

I want to move on to discuss the engines of innovation at IARPA, and that is the program managers. We talk about program managers at IARPA as being force multipliers. For any future program managers who might be listening, can you unpack what that phrase means to you and describe how the role of PM empowers them to transform not only science, but potentially national security and society?

Catherine Marsh

I gotta say, if I were a younger person at a different point in my career, I would love to be an IARPA or DARPA program manager, okay? Because it is that opportunity to come in with that great idea, okay? Program managers have to have a great idea that they're really passionate about, that's going to change the way things are done in the community, and come in with that great idea, pitch it, and get it across the goal line. And if they do that, then they're gonna have an opportunity to come in and run that program and to be that guiding light

Catherine Marsh

of that next generation capability. You know, if you think about some of our programs, you know, we're working on the development of our program called SMART ePANTS, right? And SMART ePANTS is developing next generation clothing capability. Why would you need that? Well, we go places where we might want to see things, hear things, and geolocate where they are. And...

We don't always think about all the dual-use cases of it. Because we do things very publicly on our programs, when we were talking about that program, we actually had a parent of a child with learning disabilities contact us and share with us how critically important such a development would be for their child. Because...

Their child would then always be protected in those cases where they're in a situation they don't understand and they may be in harm's way. We were so touched by that use case, which wasn't even on our radar, and that somebody would share it with us. That was something that gave me goosebumps when I heard about it. And, you know, the next generation battery technology: if we're able to increase the energy density of the fundamental materials that we're developing for the intelligence community, that carries over to how much range you're going to get in your electric vehicle. It carries over into the electric grid. It carries over into, you know, solar panels, which one of our upcoming programs is going to be about. And so there are dual-use capabilities. It's these specialized cases in the intelligence community that make it so worth our while to make that investment. And we're not moving the needle with small programs; I can't talk about the specifics of our budget.

Catherine Marsh

But this is investment that moves the needle substantially on the technology. And then that dual use capability means that the greater public sector benefits as a result of it.

Dimitrios Donavos

You touched on a point I was about to make, that PMs come to IARPA really to make a dent in the universe. And it's that ability to work on these very challenging problems and move the needle. And that leads into my next question.

Part of IARPA's approach to solving difficult problems is building multidisciplinary communities that often didn't exist before. IARPA has a history of launching programs that have lasting impact, not just from the perspective of improving national security capabilities, but in terms of advancing science in areas from quantum computing, which you touched on, to human language technology, to even superforecasting.

How have IARPA programs moved the needle in the open scientific community in ways our listeners might be surprised to learn about?

Catherine Marsh

Wow, what a great question. There are thousands and thousands of publications attributable to IARPA research across the world, and many thousands more that reference our work in their work going forward. But we also have Nobel Prize winners as a result of the quantum research that has its foundations here. We have Bell Prize winners. We are the foundation for next generation facial recognition capability. When you go onto Google and they're asking you these questions, you know, recognize all the motorcycles in these images, all of that traces back to technology developed at IARPA for a totally different application that then gets used in the private sector. And you know, all of this diversity of impact is a result of the fact that all of us, me included, are only here for three to five years. We don't have a lifetime to make a difference. We've got to come in with that idea, that passion, and that drive to make it happen and make it happen now. We can't afford to...

Catherine Marsh

drag things out and only do incremental advances. If you don't have something that's gonna be game changing, you're not gonna get across the line. And we've gotta take that risk. And there are others that will continue with it. But when you only have three to five years to get something done, you're off and running from day one. And if you're passionate about it, you're spending five of the busiest years of your life doing this while you are eating, drinking, sleeping, and thinking about it. And you are generally not just doing one program; you're doing two or three programs over the life of your time here at IARPA.

And when you think about human language translation, everybody says, I've got Google Translate. Well, yeah, that's okay if you're going to France, okay? But I just want to let you know that in the intelligence community, we're not worried about going to France, okay, or Germany or Italy, all those wonderful places in this world. We're really worried about going to those places where the bad guys are, where we don't understand Pashto or, you know, Tagalog or some of the other low-resource languages. We have to enable those kinds of tools for our partners, to be able to do the analysis, to be able to understand the threat landscape and all of that. And we didn't have those tools until we had someplace like IARPA that created them and really made significant changes that have had wonderful impacts for our partners in the community.

Dimitrios Donavos

So you referenced term limits, and your term is coming up soon. Thinking back to the first time you were at IARPA, in your role as deputy director from 2013 to 2015, how would you say IARPA has evolved as an organization since that time, and how have you evolved as a leader?

Catherine Marsh

Good question. When I was here before, it was coming to the end of the first generation of technologies that we had invested in. And some of those were extremely well received, absolutely, particularly our human language translation tools and capabilities. But others, where we had made what we thought were really significant advances, actually ended up getting put on the shelf. And I was part of a very interesting conversation with one of our IC partners where we went down and did a demonstration for them, and they said, wow, that's really cool technology, but you know, we can't use it, because if we really need to do what this technology has the ability to do, we've got to go in and worry about chain of custody and other things that are relevant and important. And I was in that meeting, and that was an aha for me: in our early days, we were not close enough to our IC partners to really understand their needs and get the buy-in, before we even started a program, that we needed to make sure our technology was really going to be carried across the line and not be put on the shelf.

And so one of the big changes that we've made since then is the involvement of our IC partners right from the beginning of program inception. In fact, on my watch, if you don't have IC partners sitting at the table at your new start pitch, okay, who say, yea verily, we really want this, this is really important to us, and hopefully you've got more than one IC partner but at least one, then your program's gonna get a thumbs down, because we can't afford to do things that are gonna be put on the shelf. And yes, some things are next generation, but even that program that I was telling you about, MIST, that wasn't successful, we had IC partners who were right there saying, if you can make this work, this is really a game changer for us.

Catherine Marsh

And so it was a smart investment, because we knew that if we could do it, we had somebody who was going to pick it up. I think that's the biggest success story in the huge changes that we've made, because they become part of our government advisory panels that work throughout the programs and then pull through the technologies at the end. And I would say that that has enabled a success rate where close to 75% of our programs have some level of transition to our IC partners. Even if it's a program that fails, if we've done data collection and data labeling and we're able to transition those data sets, at least that is something that saves another IC partner funding, and they can use it for their applications.

So we look across that for many incremental ways of being able to transition not only the end product but things along the way. We've got that buy-in and that familiarization to help get over the not-invented-here syndrome, which we all suffer from: well, I don't know anything about that, and so now I've got to relearn how to do all this. If they've been working alongside you from the beginning, watching the progress, and even coming to our test and evaluation events and being part of that, we've got much better pull-through at the end. And I think that's the biggest evolution and the success that it has enabled.

Dimitrios Donavos

I mean, it's natural, I think, for people to feel skeptical. And part of that process is enabling people to feel like they're part of that development and they have skin in the game. And so I think that has been an incredibly important development at IARPA. And in my tenure here, I have witnessed that happen. As you reflect back on your term, and as we think about the speed with which innovation is accelerating, the intelligence community will likely face a landscape of technological challenges that will look different from the challenges of just a few years ago.

Dimitrios Donavos

Looking over the horizon, what advice would you offer your successor for leading IARPA through its next chapter?

Catherine Marsh

Make sure you understand where our partners are now and where they really see those gaps in capabilities that are keeping them up at night. Right? What can't they do? You know, where do they really see those big, big threats, and why? Make sure that we are doing that. And I would say that on many programs, and more today than in the past, we are doing test and evaluation that mimics real use cases, and it's classified, so we don't tell the details of what we're doing outside, but make sure we're doing the test and evaluation that carries the most credibility for our partners, because that helps them with that pull-through. So, understand the gaps, make sure you've got the tie-in, and look at what's that big threat coming at us. Can we make computers so smart... we just heard a talk this morning where somebody said we're going to make our computers so smart that we won't need to educate people with PhDs. And I said to the program manager, I don't think that's such a great idea, you know?

But, you know, what is the threat of artificial intelligence and machine learning and, you know, the ChatGPTs and beyond that are bringing tools and capabilities? The IC cannot put its head in the sand. How do we enable those next generation capabilities to be on the high side, and to be safe on the high side, and to understand the threats that they impose, and then to knowingly take on the risks, because we've done the hard work at IARPA in a safe way that enables those capabilities to be more readily adopted? And it's not an easy...

Catherine Marsh

row to hoe, because you're always going after gaps and not everybody wants to tell you about them. But part of my warm transition with my successor is going to be to make sure that he knows all of the correct people in the different agencies and can readily get access to that information and stay current on it, because where do they see the threat coming? Even something as ubiquitous as climate change poses a threat the intelligence community has to think about. Even though you might not associate climate change with the intelligence community, we are constantly monitoring that and being smart about it. And I think that, and hiring great people who really are passionate, that...

that makes all the difference. And the program managers are the best part of the day, right? When you get to spend 20 minutes because somebody pops into your office just to share, hey, look at this great result, hey, let me show you this cool thing. That's what keeps me going, right? You know, all the headaches of things that we have to deal with, we deal with them as we have to.

But the next generation technology, when they're bringing that to you and they're showing it to you, that's the really exciting part that will keep you enthusiastic, knowing that you're doing the right thing and that nobody else is doing it.

Dimitrios Donavos

I can completely relate to that feeling, having been at IARPA for nearly a decade. I can say that one of my favorite times is when we have the program management reviews. For our listeners, each program goes through a review cycle every six months, where leadership will hear the newest developments on the program. And that's when we get to hear about all of the exciting technologies and capabilities that are being developed. As we sort of wrap up here, Dr. Marsh, I want to ask you one final question, and that is: what is one thing you can share with our audience that people might be surprised to know about you?

Catherine Marsh

I guess my grandparents came through Ellis Island, so we are first generation; my mother is first generation. Both of my parents, who grew up in the 50s, went to college. My dad was a sailor. He was a Korean conflict vet. He died at 66 years old. So the closer I get to that age, the more blessed I am that I have good health. And, you know, I look at that, and I would say that, looking back on life, I have been blessed with all the things that I've had the opportunity to do that I never knew that I would get to do. Right? I didn't plan a path, you know. I come from the wrong side of the tracks. I didn't grow up wealthy. I mean, to get to go to an Ivy League school when we were on food stamps when I was in junior high is a blessing, right?

And then to be able to look back at all that I've accomplished, because when you're 21, 22, life's ahead of you, you have this blank slate, and everybody wants to have a plan. And I never really had a plan. I let the passion for what I was doing lead me, and it didn't take me down the wrong track. And I had somebody who once said to me, if you love what you're doing, you're never going to work a day in your life. And I never feel like I've worked a day in my life. And I've also never forgotten to say thank you. I didn't get here on my own. Everybody who I work with, I say thank you to all the time.

They have helped me be successful, and I hope that I have helped them in some way. And I say this all the time, and I'm known for it: our assets go home at night. We've got to take care of our people. They're the most important part of what we do, and all the technology that we've developed, and that we continue to develop, is a result of the great people. And so we've got to make the investments in making sure

Catherine Marsh

that they have the things they need to do the job the way they want to do it and make sure that they're empowered to do it. And I have had people help with that throughout my career. And I hope that I'm giving that back to others so they can be successful along their paths in their lives.

Dimitrios Donavos

Dr. Marsh, that was a very powerful and compelling answer to a question I'm sure you did not have a plan for when I asked you. Well, I just want to close with a more formal thank you. Thank you for joining us today. Thank you for a compelling and insightful interview, and we appreciate your time very much.

Catherine Marsh

I really enjoyed this, and I'm so thankful that you're going to do Disbelief to Doubt because that truly is what we're about in changing the landscape of what we can do for our nation.

Dimitrios Donavos

Thank you for joining us. For more information about IARPA and this podcast series, visit us at I-A-R-P-A.gov. You can also join the conversation by following us on LinkedIn and on Twitter at IARPA News.

End Part 2 of 2



Get a sneak peek of Disbelief to Doubt Episode 1! In this episode, you'll learn IARPA's origin story and hear about former Director Dr. Catherine Marsh's "leap of faith" into government service. Full episode coming soon! 

Timestamp Caption
00:00:02
Catherine Marsh - And then 9/11 happened. The morning after 9/11, I got my call to join the CIA. I wasn't going to say no. I couldn't have anyway. And everybody asked me, what are you going to do? And I said, I have no idea. It was a leap of faith to come to work for the agency, and I have.
00:00:25
Dimitrios Donavos - Join us for the premiere episode of IARPA's new podcast, Disbelief to Doubt, where we speak with former Director Dr. Catherine Marsh about taking a leap of faith to join the CIA, working on Class A must-not-fail missions, and much more. Coming soon to A Podcast Player Near You.


Welcome to IARPA: Disbelief to Doubt, the new podcast from the Intelligence Advanced Research Projects Activity (IARPA). Disbelief to Doubt will explore our 15+ year history of pushing the boundaries of science and being at the forefront of innovation, all in service to our nation.

Coming Soon!

Timestamp Caption
00:00:10
Dimitrios Donavos - Science and technology are rapidly transforming not only how we work and how we live, but how our nation conducts the business of intelligence. Launched in 2006 by the Office of the Director of National Intelligence, the Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk, high-payoff research to tackle some of the intelligence community's most difficult challenges. Since its founding, IARPA has been at the forefront of innovation, sponsoring research that has the potential to push the boundaries of science. This approach often means starting with an idea that seems impossible, then moving from this state of disbelief to a state of just enough skepticism or doubt that bringing together some of the best and brightest minds might allow us to redefine what's possible. This podcast series will explore the history and accomplishments of IARPA through the lens of some of its most impactful programs and the thought leaders behind them.

We'll use artificial intelligence to navigate to the moon and back on an unmanned autonomous vehicle, delve into the structure of the mouse brain to reverse engineer machine learning algorithms, unpack the science of predicting the future, and much more. This is IARPA, Disbelief to Doubt.

