A Conversation with Dr. Alondra Nelson
Dr. Nelson discusses equity and public access to science; AI governance and policy; race and technology; and the artistry of discovery.
The conversation is moderated by Melody Barnes, Executive Director of UVA's Karsh Institute of Democracy, and Yael Grushka-Cockayne, Professor of Business Administration at UVA's Darden School of Business.
JASON NABI: Good evening, everyone. I'm Jason Nabi. I'm the project manager for the UVA Futures Initiative. And on behalf of the Futures Initiative, I'd like to welcome you to a conversation with Dr. Alondra Nelson.
[CHEERING]
[SCATTERED APPLAUSE]
ALONDRA NELSON: Thank you, my two friends.
MELODY BARNES: It's the fan club.
JASON NABI: I have a bold prediction about the very near future. Tonight will certainly be one of the most pleasant and edifying Tuesday evenings for many weeks to come. Thanks, y'all. And well beyond that, of course, tonight's illuminating conversation is sure to inspire and sustain us for many futures to come.
A quick word about the Futures Initiative, and then we're on to it. We were launched in January under the auspices of the Provost's Office. We scan the higher education horizon in search of ways to proactively position UVA to thrive in a radically evolving world.
Toward that end, the members of the Futures Initiative working group, a task force made up of representatives from all 12 of UVA's schools, several of its administrative and academic divisions, and four of its pan-university institutes, have been asking far-reaching questions about what UVA might do to achieve its strategic goals in striking and innovative ways, and, in doing so, to become the University of the future.
Part of that process, through the Futures Initiative speaker series, involves bringing thought leaders from a variety of sectors to Grounds to share their visions of the future in their respective fields. We are excited to continue this series with today's event, in which we focus on a future that will require us to better harmonize the greatness of scientific and technological advances with the good of societal well-being.
For this, it is our great pleasure to be hosting Alondra Nelson, joined by our own Melody Barnes and Yael Grushka-Cockayne. Please join me in warmly welcoming them.
[APPLAUSE]
YAEL GRUSHKA-COCKAYNE: Hello and good evening. I guess I can say good evening. It's wonderful that you all joined us on this beautiful day. I'm honored to be sitting here on this stage, and I'm excited about the conversation ahead of us.
My name is Yael Grushka-Cockayne, as was introduced. I'm a professor at the Darden School. My area of expertise is data analytics and decision sciences. And I think one of the reasons I'm here tonight, although I haven't actually confirmed this, is because we just announced the new LaCross Institute for Ethical AI in Business.
And I am honored to be one of the academic co-directors of our new institute. And I'm excited about the conversation tonight.
MELODY BARNES: And I'm Melody Barnes. I'm executive director of the Karsh Institute of Democracy. And I am thrilled to be here with Yael and to be in conversation with our friend, Dr. Alondra Nelson.
And you all heard the screams and cheers when Alondra's name was mentioned. I'm going to tell you why. I get the pleasure of giving you a brief overview of her bio. And normally, you just say, oh, refer to the printed materials.
But this gives me real pleasure because Alondra is, and I'm just going to say it, a badass. And here's what I mean by that. We're going to have a conversation this evening about the transformative changes that technology will bring to our society and to the University and to our lives.
And there is no better person to have this conversation with than someone who has the expertise that Alondra has, and who also has a bio that is as mind blowing as that technology itself.
She is a scholar at the intersection of science and technology and policy and society. And she is currently the Harold F. Linder Professor at the Institute for Advanced Study, which is a Research Center in Princeton, New Jersey.
She's also a distinguished fellow at the Center for American Progress, which is a think tank and action tank in Washington, DC. Alondra served in the Biden administration, and her final role was that of Deputy Assistant to the President and Director of the Office of Science and Technology Policy, or OSTP.
She was there from the beginning of the administration until the fall of 2022. She is the first African American and the first woman of color to set science and technology policy for the country, as in ever.
AUDIENCE: Woohoo!
[CHEERS, APPLAUSE]
MELODY BARNES: And in that role, Alondra spearheaded the development of the Blueprint for an AI Bill of Rights. She provided guidance to expand public access to federally funded research. She served as an inaugural member of the Biden Cancer Cabinet, strengthened evidence-based policymaking, and galvanized a multi-sector strategy to advance equity and excellence in STEM.
She's also had a distinguished career in the nonprofit and academic fields as well. She was the 14th president and CEO of the Social Science Research Council, and led academic research strategy at Columbia University, where she was the inaugural Dean of Social Science.
You won't be surprised to know that she's the author of several award-winning books, essays, and articles that have been translated into multiple languages. She's currently doing research for a book about science and technology policy in the Biden administration and an essay collection, Society After the Pandemic, which I can't wait to read; that sounds really fascinating. And she's engaged in research on the social power of platforms and the governance of AI.
As you can imagine, Alondra holds many honorary degrees and awards. And she was also named to the inaugural TIME100 list of the most influential people in the field of AI. So trust me when I tell you, I could go on and on. Literally, I could go on and on.
But I will close by saying that I'm thrilled that this very, very busy and accomplished woman is also on the Karsh Institute of Democracy advisory board. So thank you so much. And join me again in welcoming Alondra.
[APPLAUSE]
So I want to start with the first question. And I think the first question should be the obvious first question, which is, how did you get interested in and how did you start your work in science and technology?
ALONDRA NELSON: Yeah. So thank you for having me. Thank you for that incredible introduction. I'm both here and on the Karsh advisory board because when Melody Barnes calls, you just say yes. You don't wait. You don't wait to find out what the ask is. You just go like, yes. And then you figure it out and hope you don't get yourself in trouble. I'm delighted to be here with both of you.
I think, like probably many of the students I'm looking at in the room, I was a STEM kid. But I was never fully satisfied, or that was never the kind of world I wanted to be in fully. I grew up with parents who worked in technology and in science. I grew up in San Diego, California, a biotech and science place. So my childhood was like going to the Salk Institute for after-school programs and going to the Scripps Institute. That's what being a San Diego kid is like, that and surfing, or going to the beach.
Then I got to college. I was supposed to go to medical school, like so many of us. You get tracked; you're supposed to go to medical school. And then I got to college. I went to UC San Diego, where I grew up. And I realized I was much more interested in people and problem solving and other sorts of things.
And I found myself, luckily, at an institution that had what's still called, but is rare these days, four-field anthropology. To finish my undergraduate degree in anthropology, I had to do physical anthropology, which we now call biological anthropology, so I was doing biology. I also had to do courses in archaeology, so that was like soil chemistry and geology, in addition to linguistic and sociocultural anthropology.
So the undergraduate degree that I did had both science and social science together. And that perspective, that sort of old-fashioned take in anthropology, was that you can't understand human societies without understanding all of these things. So not the science in its isolation and not the people in their isolation. You need to understand them together.
And that felt so right to me. So I kind of immediately went from being on a science track to studying that. But I still was at an institution that allowed me to take like amazing physics classes and chemistry classes and the like. So that, I think, is kind of where that sort of begins. And so I have always thought about those things together.
I also should say, my mother was an Army cryptographer, if you can imagine, of all things. I mean, she was a WAC or a WAVE, whatever the thing was, before women could actually be full members of the Army.
But as a child, I grew up with her working on sort of big IBM mainframe computers. My mother was a computer programmer and systems engineer. And so there was also a kind of childhood in which I didn't think that women and computation were like opposite things. When you're in the back of the Chevy Vega with the punch cards that mom was just using on the IBM, you don't think like, God, if only women could work in computing.
You're just kind of like computing is like--
MELODY BARNES: This is normal.
ALONDRA NELSON: It's like detritus in the back of the station wagon. And so I think I just had the benefit of having this extraordinary, path-breaking mother, who made things that I now know, as a teacher and as a mentor and as a policy advisor, are extraordinary seem very commonplace. And so that just gave me, I think, a different perspective.
And then I think more immediately, thinking about my work in the Biden administration, it was-- I had been working on a book about the Obama administration. I had been working on-- I started in 2016 interviewing people who had worked in the Obama administration because I was fascinated by the ability of this administration that you worked in, that was clearly trying to take us back to this kind of bold era of big science.
So it was like under the Obama administration, that you had the Precision Medicine Initiative, which was this initiative to get a lot of American data in a database and think about how we can do a genomic analysis. It was under the Obama administration that you had the BRAIN Initiative, which was trying to map all the neurons of the brain, much like the Human Genome Project.
And so I was kind of fascinated by the way that administration was also thinking, as I think the Biden administration does, about how science and technology and innovation are important drivers for the economy and for education and other facets of the social world that we care about.
So I was absolutely fascinated by what I saw as a shift in how we did science and technology policy. And it was through that work that I came to work in the administration.
MELODY BARNES: And so you're now in the administration. The blueprint for the AI Bill of Rights, tell us how that got started and what you wanted to accomplish with that.
ALONDRA NELSON: Yeah. So as a student of the Obama administration, I knew that administration, over the course of 2012 to 2016, had published some very smart white papers, which you can still find online, around, first, what we called big data, and then, by 2016, AI.
And these were sort of broad think pieces and sort of guidance, as you'd formally call it in government, about what the United States needs to do if it's going to be ready to really leverage and interoperate data sets, and how we need to think about the privacy implications, and what the implications are for work and for health care, et cetera.
And then by the time you get to 2016, there are white papers on AI and civil rights, AI and job opportunities and employment and the like. So there had already been all this prior thinking.
And then there was the Trump administration, which didn't do a lot of science and technology policy. I mean, we can talk about that in the Q&A if you like. But the OSTP that I arrived at on the first day of the Obama, I mean, the Biden-Harris administration, had only 30 people in it. I mean, we had--
MELODY BARNES: Contextualize that for people.
ALONDRA NELSON: Yeah. So the Obama administration had about 150 people working in that office. And by the time I was working in the Biden-Harris administration, we had about the same again. OSTP does a lot of things, including just the grunt work of congressional mandates.
So every time there's a piece of legislation, some of it says something like, and OSTP, every year, will submit data on the thing or a report on the thing. And so we came in after four years of Congress yelling because there weren't enough people, actually, to fulfill all these mandates that all of this law had laid out. So the office had just shrunk.
But what the Trump administration did quite aggressively and well was around AI policy. So in the last NDAA-- National Defense Authorization Act-- of that administration, there was something called the National AI Initiative Act. And it sort of stood up a few pieces of infrastructure that we were then able to move with. And the AI Bill of Rights idea, I think, really emerges from work that the Obama administration was doing.
But also, I think, from some of the work that the Trump administration was doing that we felt, in the Biden administration, was kind of under-realized. So you had an executive order or pieces of guidance that said things like, AI should abide by democratic values, or whatever. It's like, what does that mean in practice? How do we begin to implement and think through what that means?
And so the AI Bill of Rights was an attempt to make that granular. What does it mean to say that we've got shared values as a society, bipartisan values? And how do we make those real? And that is the way that we are beginning, slowly, in the United States, to advance AI governance.
MELODY BARNES: I'm going to turn it over to Yael and come back later. I have another question on that.
YAEL GRUSHKA-COCKAYNE: So we're going to tag team it. And we also threatened that we might go rogue at some point. So as I mentioned, I am a professor at the business school. And we often think about various needs for change and strategy. We think about it as either top-down or bottom-up.
Top-down, it comes from the leadership. It's a vision. And it's kind of shared with an organization, which then has to follow suit. Bottom-up means pressures from employees or customers or consumers. It comes from the bottom, and therefore there's some kind of adaptation or change that occurs as a reaction to pressure coming from the bottom up.
When we think about AI governance, is your sense, in the United States or even globally, that this is going to be more of an evolving top-down approach or a bottom-up approach? Is it something that is going to come from users, from corporations, or is it going to come from, for instance, the White House?
ALONDRA NELSON: So I love that you use the term "AI governance" and not AI regulation. Because I think AI governance is both. And that if we are going to get the use and applications of AI to a place that it's mitigating risk and maximally beneficial for the most people, we need to have a suite of tools and levers.
And so those include things that are, hopefully, if we can get some laws out of Congress, top-down, which would be formal regulation, new regulatory authorities and regulatory agencies, actual laws and the like.
But there's also kind of new-- there's standards. So those are technological standards. What are the ways that we should be thinking about how technologies are built and used? What is a high capability AI model? What is low? How do we think about those things? How is everyone using the same language both in the United States and abroad? These are international standards.
And so those are not quite laws. Those are agreed-upon kinds of definitions and ways that we think we're going to move in the world. And then there's norms. We've got these new tools and systems, so how are we going to use them? Who's to say-- we might talk, since we're at a university, one of the world's finest universities, about how AI tools and models should be used in the classroom.
I mean, you can create a kind of university policy, but it's not a law per se. It's actually a norm that we're kind of slowly creating. So I think we need the bottom-up and the top-down, a whole kind of suite of things. I will say, the United States needs a bit more top-down right now.
I mean, we have not been able to have any kind of systematic regulation around AI in the United States, although certainly, the president's executive order on AI is extraordinary in a lot of ways, in part because, as colleagues and former colleagues in DC say, the president said we should pull every lever. And so I think you've got federal agencies kind of using all of the tools at their disposal to try to make sure that these tools are used appropriately and beneficially.
But I think we need a whole kind of panoply of things. And so I don't want us to get overindexed on regulation and understand that there's lots of other things we can do as well.
YAEL GRUSHKA-COCKAYNE: And so maybe, if I may dig deeper: what happens when the movement that is coming from the ground, from the industry, from the users, clashes with the regulation that eventually gets introduced?
ALONDRA NELSON: I think it's actually a moment where those things are converging. So we hear, both in political theater and performance, from big tech executives when they go to Capitol Hill saying, please regulate us. My God.
MELODY BARNES: It's what we dream of.
ALONDRA NELSON: If you watched the hearing last summer or two summers ago. But they do kind of mean it. I mean, if you talk to folks in industry, they do feel like things are a little bit out of control. And I think it creates a lot of both organizational risk and financial risk for companies when you don't know the kind of basic terms of engagement for the field that you're engaging on. So there's that.
I also think, as AI has moved from behind the scenes-- in November of 2022, the shot goes around the world. It's like, those of us who had worked on AI for many, many years, people thought we were the most boring people in the world. And then all of a sudden, at the Hanukkah table, at the Christmas table, everybody was like, AI. And everyone wants to talk to you about it, and thinks you're very, very interesting.
So that moment--
YAEL GRUSHKA-COCKAYNE: I know everybody in this room has been doing it for a long, long time.
ALONDRA NELSON: But all of you can remember those moments when you were just like, I'm working on this, like, AI, whatever. And your whole family was, oh, my God, shut it down. But that moment of these sort of behind-the-scenes technologies becoming consumer facing meant that companies want consumers, whether those consumers are the federal government or individual consumers. And the question becomes, how do you get adoption?
And adoption means that consumers of the technologies, whether it's the University of Virginia procurement office or the federal government procurement office or us as individuals-- the kind of adoption that all of these companies want requires there also to be rules of the road.
And so, if you look at things like the Edelman Trust Barometer, the trust of Americans in artificial intelligence tools and systems is low, like some of the very lowest in the entire world. And so if this is the market you're counting on to live up to your valuation of 5 billion or whatever-- it's probably a trillion-dollar valuation-- it's not going to happen unless people feel that the tools are safe, that they're responsibly used, that they're not putting their privacy or their families at risk by engaging in the use of them.
YAEL GRUSHKA-COCKAYNE: OK. Fantastic. So thank you for that vision and understanding how hopefully these pressures can coincide.
ALONDRA NELSON: Yeah. I think there are some-- we have this kind of rare moment of like overlapping incentive structures for a little bit of time.
YAEL GRUSHKA-COCKAYNE: I'm going to change topics just a tad and talk a little bit about the fact that in the Bill of Rights, one big chapter, or some of the discussion there, is around algorithmic discrimination protections, to prevent biases from occurring and to protect various individuals.
There is conversation around race, but it's also about color, ethnicity, sex, religion, and so on. Is this a new tension around the concern of AI and technology with regard to those dimensions, or is this something that has been there all along with other technologies, historically? You've studied this for a while.
ALONDRA NELSON: Yeah. Well, thank you for raising-- you raised, I think, two of the issues in the AI Bill of Rights. Certainly, well, it has five sort of prongs. And protections against algorithmic discrimination is one. And it's significant because it's a through line in the whole policy document. But it also is about the fundamental issue that AI tools and systems should be safe and effective.
I mean, that's just fundamental consumer standards. Looking at Professor Citron, they should have some modicum of data privacy. I mean, like shocking, that you should have notice when an AI tool or system is used for a consequential decision about your life. And that you should have some sort of fallback if a decision is made using an AI tool or system, and you have a question about how that decision was reached.
YAEL GRUSHKA-COCKAYNE: And an ability to opt out.
ALONDRA NELSON: And an ability to opt out. And those things were all distilled from almost a year of engagement that we did with academic researchers, industry researchers, folks in civil society, and just regular folks. I think the opportunity and challenge that artificial intelligence presents us-- let's take it down a level, let's say generative AI-- is that it is a tool that brings together lots of different dynamics that make it different from past technologies.
So it's dynamic, it's iterative. It uses historical data often. The data is not transparent. We don't have a lot of accountability around the tools and systems. People who make the systems tell us we don't understand them. We can come back to that. I call that algorithmic agnotology. And to me, it's kind of a learned ignorance that one wants to have around their systems that should be, I think, pushed back against.
So it does present a different challenge for us. And I think some of the big challenges we've seen immediately-- we can talk about both near-term and farther-term risks and harms. But what we are seeing already are harms to people with dark skin from the use of facial recognition technology, who are being falsely identified, falsely arrested, falsely convicted in some instances. That can't be. That can't be how we want it. That's not the society that we want to live in.
We know that the use of generative AI makes it easier to do a cut-and-paste job and to create, basically-- to do cyberstalking or sexual violence; that is becoming a challenge with generative AI.
So it increases, I think, concerns that we already have, and the sort of scale and velocity of acting on them. So I think it's different from other technologies in that you don't worry about this with the toaster or the car. You might have worried about it with the computer, the introduction of the computer. And obviously, everyone, let's be very clear and not prudish about this: the very first technology innovation with any new technology is pornography.
I mean, when people are like the sex AI chatbots and all that. It's like, of course. Of course. Every sort of wave, if you could create pornography with it, that is what it's been. And so that's always been a special risk in those spaces. But I think this technology in particular presents those problems. I also think we learn more and know more.
So I think in 1989 or '94, when we were all beginning to get personal computers kind of trickling through our lives, I don't think we even knew how to think about what that might mean. And that transition was pretty slow, actually, if we think about it. I mean, it happened over the course of maybe a decade or eight years or something like that.
We woke up one day in November 2022, or whatever day it was, and chatbots were a consumer-facing product, and everyone in the world had access to them for free. And that was kind of an overnight transition. And that makes all of these risks in some ways more acute, and makes it a little bit different from, I think, prior kinds of technology paradigm shifts.
YAEL GRUSHKA-COCKAYNE: Yeah. So maybe in the past, we've had an opportunity to think about it; the academic world, which moves a little bit slower, had time to develop some thinking around it. And in this case, we don't have time because it's moving so fast. I'm going to pass it over.
MELODY BARNES: And I want to come back, given the conversation that we've been having, to something you said a few minutes ago about the AI Bill of Rights and our shared democratic values and putting, these are my words, meat on those bones.
And I'm curious about the struggle and the challenge of doing that at a moment when it feels as though we don't all share those same values, when one group's or one person's values may not look like another person's values, and how you go about the process of policymaking given that challenge.
ALONDRA NELSON: Yeah. The crew of us that were there on day one of the Biden-Harris administration came into a really crazy scene. I mean, I'll leave it for others and historians to write this book, but we did a presidential transition not only with contestation and political violence.
So when I first came to DC, like every building near the White House and near the Capitol was double fenced. I mean, every store was closed because of the pandemic, and everything was just fenced. It was unbelievable, actually.
And we did the transition during the height of a pandemic, and before the sort of White House Counsel, the national security apparatus, had approved Zoom for use in the White House. So when I started in the Biden administration, we were still doing conference calls. And the only thing that we could use with video was Skype, for some security reason I didn't know.
YAEL GRUSHKA-COCKAYNE: I thought you were going to say Webex.
ALONDRA NELSON: No. No. We couldn't even use Webex. I mean, that's new. That's newfangled. You're getting crazy, Yael. I mean, it's crazy, crazy talk.
MELODY BARNES: I can empathize. I remember going in in '08 and '09. And we were in a big meeting and they were like, and you can't take your laptop home. I was like, what? We don't have the security for it, they said. But yeah, I mean, technology comes late.
ALONDRA NELSON: We could take them home now, but also, you can't use the Google Suite. They're just like, oh, no. So we were trying to figure out, in that context, which was already different from any other transition context, I think.
And moreover, the question that we were facing, particularly at OSTP, is, how do you do science and technology policy in a moment where American trust in democratic institutions and in science and in our ability to wrangle this pandemic is low-- low, low, low.
And we came in, the leadership came in, with a philosophy that was like, we're going to try to do it differently. So we had, by the time I left, science communicators on the staff for the first time, people whose expertise was to translate science to the public.
So instead of OSTP, as it had traditionally done, putting out a policy document that we wonky people would write-- and if people could read it or not, shrug-- we turned the job back upon ourselves. Actually, our job, as people who work for the American public, is to make these documents clear to them.
So part of what you see in the AI Bill of Rights is a commitment to that. What is clear communication to people beyond ourselves about the stakes of these issues? But I'll say about the process, so the AI Bill of Rights is a curious document because it's guidance. It's not formal policy. But we did a formal policy process.
But we also engaged the public. So it was announced in an op-ed in Wired that has an email address that goes to the White House at the bottom. So it was like, if you have anything to say about this, write to us at OSTP about this thing.
We took a page from the FDA and did a series of just town halls. So if you've ever gone to an FDA hearing, you get a two-minute timer. And there's a facilitator and they say, anyone can speak. And so we did that. We had several of those. We did them different times of day. So people in different time zones could come.
We had panels on topical issues, like AI and health care, AI and the workforce. And we had weekly standing meetings that all of the staff working on this workstream had to set aside in their calendars. And we just met with anybody who wanted to meet with us, many of whom we met through this email address that was in this op-ed in Wired.
I mean, we met with high school students. We met with rabbis. We met with other kinds of clergy. We met with just regular folks. So the typical thing-- I was new to DC, and it felt a little weird to me that the kind of big civil society organizations and the big lobbying organizations are the ones that you talk to. Like, is that how it works? Why? Why do we do that? So we really did have a broad swath of people that we engaged.
And the AI Bill of Rights is really a distillation of all of those conversations. It's not breaking any new ground. It says nothing, I think, that prior documents hadn't said, including Trump administration documents-- a lot of that work was led by Michael Kratsios, who's quite good.
People want their systems to be safe and effective. I mean, these are very, I think, common sense claims. We also do talk about issues around discrimination. We talk about vulnerable and marginalized communities. And that's very much the imprimatur of the Biden-Harris administration in thinking about these, but at a high level.
I think that we felt comfortable by saying, you should know when AI is being used. It should be safe and effective. If it's being used for resume screening for a job or in a health diagnostic tool, you should have a fair shot. You should not be discriminated against in the use of those tools if you're trying to rent a house or get a mortgage or get a job.
And you should have some sort of fallback. And so I think we felt-- it took almost a year of that process. But we did try very hard to get to a place where most people would, I think, find it quite common sense, and moreover obvious, and in many instances a kind of recapitulation of what they had said to us about what they thought should happen.
MELODY BARNES: I want to ask a question. We've been talking a lot about AI, thinking about science and technology, generally, as they affect other aspects of people's lives.
So thinking climate change, thinking public health: how do you see, over the course of the next decade or so, the use of science and technology evolving in the policymaking process to try and tackle those big issues, particularly at a moment where, sometimes, there's a struggle with data, with facts, and what that means for policymakers as they are also listening to and hearing from their constituents?
ALONDRA NELSON: Yeah. That's a lot of hard, I think, issues kind of clustered in that really good question. I mean, I think one of the things that, at least in my time at OSTP and working in the White House, we were committed to is continuing, I think, from the Obama administration in particular, that started things like the Open Government Partnership, that started the first White House GitHub account.
Hello, did you know the White House had a GitHub account? That's all the data nerds in here. There was an attempt to make data available to the public. And I think that we also wanted to do that and understanding that people might take that data and do crazy things with it.
But I think, as a policymaker, you wanted to be able to say, we've provided you the facts. We can also provide some interpretation. But to the extent that we can restore trust in the work of government, part of that is just giving the data to people.
And you can't control how they interpret it. All of us have been on social media. You see the diagrams with the strings connecting one White House memo and another, and it leads to this kind of conspiracy. So you can't control that. But what you can control, as a policymaker and as a leader, is giving people high-quality data.
We also had support for that in things like the Evidence Act-- Paul Ryan was a big sponsor-- which President Trump signed into law, and which also has other obligations for evidence-based policymaking, for data that's supposed to be provided to the public, and the like.
So there's been, in the last decade-- maybe not clear, I think, to people who have not worked in government-- a real sea change in how government thinks about its obligations around data to the broader public. And I think that was really important.
I will say, the challenge that we-- the other challenge is that facts alone, or the science alone, don't solve the thing. And that's the work of policymaking. And that's the hard stuff. So coming into government during the pandemic, the science and the engineering was like miraculous on the level of the miracle.
We had the genome of SARS-CoV-2 decoded in less than a month. We had an operational, viable vaccine in less than a year. I mean, that has never happened in the history of the world. But then we had all sorts of challenges. How do you keep it cold enough? There were all these kinds of infrastructure challenges. And then, how do you get people to take it?
And those are questions of social science, behavioral science. Those are questions of whether or not people trust the government, whether or not they trust the research because it was done so quickly. I mean, part of why we engaged folks with science communication expertise is that it was clear that government had to be a lot better at saying things like, we got it done more quickly because we did all of these other things more quickly than before, not because we cut any corners or, like, risked people's health to get this vaccine done very quickly.
So it's just a kind of different philosophy. And then the other thing I would add, thinking about the amazing work that you did in government and at the Domestic Policy Council-- all of that work now involves science and technology policy issues.
So not just climate change, but how we're thinking about DHS and immigration, which is using the CBP One app, which you have to have a smartphone to be able to use-- which probably many refugees and asylees don't have, if they have a phone at all-- and which requires that you have very good lighting if you're a dark-skinned person seeking asylum. All of those are technical questions.
Ditto, the health portfolio, ditto, the education portfolio. So in retrospect, I think my interest also in the Obama OSTP was that I think it was a real kind of awakening that all significant domestic and international policy issues were also science and technology policy issues. And I think that really is where we are now and where we're going to be sort of heading.
MELODY BARNES: Yeah. And listening to you and your bio leads to this next question: your reflection on the fact that science is not just a collection of facts but a deeply social process, and your background and interest in the humanities and that combination with the sciences. I'm wondering if you could talk a little bit more about that, about the social process.
ALONDRA NELSON: Sure. I mean, let me talk-- maybe offer a couple of examples. And we can think about gender. All of this is just a more granular way in, but you should think of it as a metaphor for training data and AI systems. We can talk about those, but I want to be less abstract before getting more abstract and talking about AI.
For example, if you think about the clinical research that we do and the sort of protocols that we have for clinical research and for creating new drugs, traditionally, pretty much up to the present, clinical research subjects have almost exclusively been male.
And we've created a whole drug ecosystem and diagnostic ecosystem around men and male biology. And so we can problematize men and women, all of that. We can have that conversation. But that means that we have created things that we say work for all without actually probing that as an empirical question.
So those are early design choices, in part, made by the scientists who were working-- like, I'm going to test it on this guy and see if it works, or whatever. Think about 18th-century science or something.
So that's one example. If we think about science, or biomedical science, as not just being the drug but the process through which you achieved it, then if we have pharmaceuticals that work for some people and not for others, we shouldn't be surprised, if we're able to think that through.
On the kind of engineering side, if you think about, our crash-test dummies, for example, some of you might know this data, are men, or it's like male physiology. And there was a kind of headline two or three years ago. It was like Swedish scientists create a woman crash-test dummy.
And so you think about that design choice, and that somehow that was supposed to sort of stand in for everybody else. And obviously, there's all sorts of reasons why that doesn't work. And you can take the male crash-test dummy, make it smaller or whatever, but it's still not quite a kind of mainstream or normative-- whatever phrase you want to use-- woman's physiology.
I share those as more obvious examples. But take it to the space of AI and think about historical training data sets around employment or around housing. Computer science-- if you're going to do hiring in computer science, who is traditionally in the data set?
So if you look at all the resumes of all the successful computer scientists in the history of the world up until 2010, what does all that data tell you? It tells you they're male, tells you they went to Caltech or Berkeley. Those are a few data points.
So that kind of data is being used, we know for certain, in resume screening in 2024, depending on the company. There are companies that are much better-- I'm a big fan of Indeed and their CEO, who's actually very engaged in these conversations about bias and the training data.
But we're pulling all of these kinds of historical constraints and limitations into our design choices around how we're doing AI in the contemporary moment. And so part of our mission, if we want to do it better, and do it in a way that benefits more people, is to be willing to ask those social questions, those philosophical questions, about how we're getting to the data and the decisions that we're making using these tools and systems, whatever that is.
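To make the point about historical training data concrete, here is a minimal, purely hypothetical sketch, not any system Dr. Nelson describes: a classifier fit on synthetic "past hiring" outcomes that favored one group learns to reproduce that skew through a proxy feature, even though no demographic field is ever shown to the model. The data, the feature names, and the use of scikit-learn's LogisticRegression are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical "historical" hiring data: group 0 was favored in past decisions.
group = rng.integers(0, 2, n)                      # demographic label, never given to the model
skill = rng.normal(0.0, 1.0, n)                    # identically distributed across both groups
school_proxy = ((group == 0) & (rng.random(n) < 0.7)).astype(float)  # credential correlated with group 0
hired = (skill + 1.5 * (group == 0) + rng.normal(0.0, 1.0, n)) > 1.0  # biased past outcomes

# Train only on the "neutral-looking" features: skill and the school credential.
X = np.column_stack([skill, school_proxy])
model = LogisticRegression().fit(X, hired)

# Skill is the same across groups, but the model recommends group 0 far more often,
# because the proxy feature carries the historical bias forward.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted 'hire' rate = {rate:.2f}")

The design choices here, which features go in and which past decisions count as ground truth, are exactly the kind of social and philosophical questions being pointed to.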
MELODY BARNES: That's great. Thank you.
YAEL GRUSHKA-COCKAYNE: We're here with the Futures Initiative. And a few weeks ago, we had a talk in the same capacity related to the future of higher ed, a very stimulating talk. And obviously, AI and generative AI and STEM, more broadly, kind of play a key role in the future of higher ed.
In your mind, what are some risks and some challenges, and maybe even some hopes related to how AI affects higher education?
ALONDRA NELSON: Yes. This is a fascinating space to talk about these issues. I mean, first of all, I have a wish, which I have said to Sam Altman, so he's well aware of it: that they had not been racing to market, and had taken another day, another week, another month to talk to teachers about this tool before they released it.
The freak-out that happened, with schools banning it and penalizing students and analyzing their papers with bad AI detection tools that do not work-- I mean, this whole thing just did not have to happen, I think, even if you had just given them a little bit of a heads up.
So whatever market incentive or market desire was behind that, that's just irresponsible. And I think that we need to be able to say that. We could have imagined a rollout of ChatGPT even a month later. I mean, people had seen GPT-3. We'd seen 3.5 and 2. It wasn't like this thing just came into the world and people didn't know that there were increasingly capable tools.
But you could have imagined a partnership with teachers, where they had different kinds of tools, teaching modules, like things that brought it into the world in a way that was less, I think, adversarial and less traumatic, I think, for the classroom.
YAEL GRUSHKA-COCKAYNE: Or at least wait until after winter break.
ALONDRA NELSON: There's that. Right. Exactly. Let the parents deal with it. Go back home. So let me just say at the beginning, I mean, I think we have to teach differently. And I can say this with some liberty because I don't have students right now at the Institute. And I think that's OK.
I mean, one of the things I've been interested in over the last couple of weeks is Google's introduction of NotebookLM, which has all sorts of problems, including that it hallucinates. So if you're using it and you need something factual, do not rely on it for the facts, because there is a degree of hallucination.
But I think one of the takeaways of how people have been responding to that is that we learn differently. I mean, go back to biblical times, which were oral cultures. So should we be surprised that students are like, oh, my God, this essay that I'm supposed to read-- and my teacher said it's supposed to be more virtuous to read it-- has come to life for me with this bizarre chatbot podcast thing, which is how I learn or how I take in information? Ditto video and YouTube and TikTok, and all the things that people who are younger than us are much better at and much more interested in.
I think we need to be open to those kinds of conversations. And also remember that all of our cultures start as oral cultures. I mean, the reason we know anything about anything is because we used to talk about it. And that's how people took in information.
And that should be OK. I also think we don't know exactly how these tools can and will be used. I mean, we're still figuring out to what extent they should be used. And we can talk about that. I mean, I have concerns, particularly around K-12 education, around the tracking and the data surveillance of young people in schools, that we should, I think, really be wary about.
But I think, also, students are going to show us how these tools are useful. And we can call that plagiarism, or we can call it something else and give them other ways of learning and other sorts of tools in the classroom, other kinds of assignments that are different and that incorporate the fact that this technology exists in their lives. And they're going to use it.
YAEL GRUSHKA-COCKAYNE: So that's very optimistic, and that's the hope and the vision. Do you have any concerns, or do you want to name some risks that are important--
ALONDRA NELSON: Well, I mean, I think the risks for any of these chatbots that you use for free are all the data collection and surveillance that happen. We don't know how the data leakage is working. We don't know enough about the systems to know what happens as your inputs, as queries, get incorporated into the training data as just part of how the systems work.
So all of that, I think, people should be mindful of. And if you have access to enterprise software that has more data protection, or even a paid tier that has more privacy protection, I think, yes, use that. I'm particularly, particularly worried in the K-12 space, because we already have demonstrated harms to young people.
I mean, all of their biometric data is being tracked-- eyes, hands, all of that. I mean, that's ridiculous. In addition, now, there are the things that young people might input into a chatbot that you don't know about. I mean, a child is not going to think, I shouldn't put my mom's Social Security number, or some other kind of sensitive information, or the symptoms that a parent is having or I'm having if I'm ill, into this chatbot.
And so I do worry about that. And then there are the bigger harms. I mean, we haven't talked about any of those. If we're trying to mitigate climate change using AI systems, it matters that we are using 10 to 50 times more energy to operate these systems.
How do we want to think about that? And maybe it's not worth it. And this is where I think norms are really important. How do we want to talk about and think about the fact that, if you're just going to make a basic query, do we need to say to people, if you care about the environment, use Google; use DuckDuckGo for your privacy?
Don't use the chatbot, which is more fun, probably, but it consumes a great deal of energy that we haven't, right now, figured out how to supply-- we're firing up the nuclear facilities again to try to get enough energy on the grid to figure out how we're going to do all this.
YAEL GRUSHKA-COCKAYNE: So, related a little bit to the idea of being responsible in how you leverage generative AI, or AI in general: going back to our earlier conversation, what role does higher ed have in enforcing and educating about AI policy, regulation, and governance?
ALONDRA NELSON: Huge. Huge, huge, huge. We don't know anything about these tools and how they work in our world. We're being told that the way to know about them is to be a good prompt engineer, those sorts of things. But there are scores, hundreds of academic research questions around these tools and systems.
Energy use-- what does it mean? How do people learn better: if you give them an article as a podcast, versus distilling it into bullet points using a chatbot, versus reading the article? I mean, there's all sorts of those kinds of questions.
What does it mean to have systems that output sentiment? I mean, there's just kind of all sorts of fundamental science, social science questions. And then I'll come back to where we began with the thing that drives me crazy is creators of these tools saying, I don't know. We made them. We have no idea how they work.
And I think, in part because I work at a research center where not only social scientists but, historically, mostly scientists work on really hard problems, I just think that is an unacceptable answer. And it's only acceptable if you take the market, and wanting to race ahead in the market, as the only outcome that you're supposed to have.
So I'm heartened that DARPA and other research agencies, ARIA in the UK, are trying to understand the foundational, fundamental mathematics and science of these systems. But that's taking resources away from other things.
The companies should be understanding more about these models, but it's more just like, we got it to run well enough as a product, we're going to push it out the door. And we don't know anything else about it. We don't know what it's going to do. But it's getting us the valuation that we want, and it works well enough to send to the consumer.
I mean, that is really problematic. And so I think some of the role of the university is to continue to help figure out some of those problems. But there's a whole swath of other problems. Like, how are we going to think about it if there's a labor transition coming-- not just accept it, as I think companies, which have a certain responsibility to their stakeholders, do. So, OK. But what are the other pathways for thinking about work?
How can work displacement be mitigated? Are there other models for thinking about hours of work per day? There's a whole bunch of research that needs to happen that some people are doing, but not nearly enough.
And I think the University is so important. And my worry for universities is that we will be captured doing the cleanup work for companies as opposed to doing the blue sky innovation, experimental work. So I always want to make the case that there's so much research to be done, but it can't just be answering the questions that the market-- the companies don't want to answer, don't want to pay for.
YAEL GRUSHKA-COCKAYNE: I've heard a lot of calls to action for business school students, so I'll take that back to Darden and work with our students on some of that. I know that we are going to leave time for questions from the audience, so I'm going to end with my final question to you. We have plenty of students here. What is your advice to the students in the room, both in terms of their studies and as they start their professional careers in the workplace?
ALONDRA NELSON: Yeah. I would say, study what you want to study. The last 10 or 15 years, it's been: everyone has to study CS and learn to code. So I think you should take these very imperfect robots, which now can write sometimes good, sometimes crappy code, as a get-out-of-jail-free card. Escape whatever the sort of prison has been in which the only way you can live happily in this society is to learn how to code, unless that's what you want to do.
If coding is your heart's desire, go at it. But now there are other ways to think about that. There are other opportunities. And I am critical of the idea that everyone needs to become a prompt engineer, even though I do think, fundamentally, that is a kind of coding we will all have to learn how to do to engage with these systems.
I also think it is incumbent upon the companies to create systems that have interfaces that are easier for people to use so you don't have to come up with crazy phrases to learn how to get an output that you want. So that's just product design. And right now, it's not, I think, up to par.
But I hope that this moment of generative AI for all of its challenges is also opening up all of the things that we need to work on. There are philosophical questions around, there are huge humanities questions around AI.
Some of them are in the realm of philosophy, but some are just like how do we think about literature in this moment. How do we think about literary theory in this moment? How do we think about communications and media studies? All of these kind of questions. There's huge social science questions, and obviously, there's lots of science and research questions.
So my hope for the students is that this feels like a moment of broadening out what I saw had become a far too narrow path for their lives and their creativity.
YAEL GRUSHKA-COCKAYNE: Wonderful. And I think we're going to open it up for Q&A. We have a couple of mic runners, I believe. And so we encourage you to raise your hand and ask us some questions. Fantastic.
MELODY BARNES: Next question here.
YAEL GRUSHKA-COCKAYNE: Extra points go to the first question.
AUDIENCE: Hi. Thank you for speaking with us. I'm curious what government policy and regulation can do to address the profit-making incentive and the race to market for AI systems.
ALONDRA NELSON: I think regulation can help a little bit. So there can be some friction placed in a pipeline that is sending things out too quickly, without the things we would require for any other product: verified pre-deployment testing that these systems are safe. For any other consumer product that you can imagine, that is the case. These tools do not have to clear even that low bar before they're released.
So I think that's pretty important. Someone has to sign a compliance checklist; someone has to verify it. There's a lot of different ways you could think about that. And the incentive there is not to slow down the race to market. The incentive there is to ensure that the public has tools that are safe. And we're not there yet. Even as we're talking about catastrophic risks, new releases and versions of these tools are coming out at a regular interval, and nobody is testing to make sure that they're safe.
I mean, that's shocking. And it's not true of literally any other consumer product. So I think that is a fundamental thing. I also think about some of the hype from the companies about what the tools will do-- cure cancer forever, fix climate change, all of that. Some of these outcomes are things that the market is never going to deliver, at least initially.
And that's part of what government can do distinctly, and why, I think, across people's different ideological and partisan divides, we need to figure out a way for government to be able to invest in public goods around AI, to be able to model responsible AI, to be able to use its procurement levers to only buy tools that meet a bar.
To go back to Yael's question, a conversation we were having: think about the personal computer and the US federal government, which is the world's largest consumer and, if you include DOD, the world's largest employer-- lots of people using lots of tools.
If you think about the personal computer, those came into government over time, and there were vendor contracts over time. The US federal government is about to make huge, all-at-once, massive investments in AI tools and systems.
And that is a really important moment for saying to the market and to companies: we're only going to give you the trillion dollars that the federal government is going to spend over the next two years or whatever on these tools and systems if they meet certain bars, if they're safe and effective, if we can have some transparency about what's in the training data, if we know members of the American public or protected classes are not going to be discriminated against. The government can play that role.
And then to the extent that we want goods, like can AI help cure cancer? I don't know. But if there's not a market for it, it really falls to government and government research to invest in research that gets us to a place where there might be some commercialization potential.
And so those sorts of public goods and public benefits for AI that are, right now, market failures are super important. And I think, again, however people might think about government expenditures and the role of government, it's a super important place in technology, a super important role for government to play.
AUDIENCE: Thank you so much for your talk. And I'm so happy that I finally get to talk to you. I actually use the AI Bill of Rights. I worked with a team of researchers at NIEHS. And there are quite a few consequential questions that we discovered in some research that we did comparing the public's use of AI and experts' use of AI.
So these consequential questions are really like-- I don't know if policy, sometimes, speaks to the public about this. Because while America is really open in the Bill of Rights, compared to Europe and probably other countries, in regulations, America has a lot of open conversations around innovation.
And some of these things have to do with America's leading edge in the market. Technology is America's business. How, then, do government and policy have these conversations with people?
ALONDRA NELSON: How indeed?
AUDIENCE: And there's also one piece that we added to that paper about technology being a reflection of its creators. That's a common point that we found in the data from both the public and experts: people do create things in their own reflection.
So these harms that we find in these technologies, are they not inherently harms that already exist in these people, now translated onto these technologies? Even if we stop the technologies, do we really stop these harms? Those are some of the insightful, consequential questions that we found in that research. And probably, we can share that paper.
ALONDRA NELSON: Yeah. Well, thank you for your comment. There was a lot there, and we can continue the conversation, but I guess the one thing I might tease out is the tension in the United States in particular -- and your phrase was something like, technology is America's business.
I mean, the United States has, right now -- and we're trying to keep it -- a pretty significant asymmetrical advantage with regard to AI. So that puts legislators in a tough place. One illustration of this: if you watch the first big congressional hearing that we had around AI, the one Sam Altman was at, at one point during that hearing Senator John Kennedy says to Sam Altman, we should have a regulatory agency for AI. Do you want to run it?
I don't know if you remember that moment, but I watched it very closely -- this is exactly the tension. On the one hand, we really understand that leveraging these tools and making sure they're used responsibly will lead to greater adoption, will lead to expansion of markets -- a lot of the things we want.
But right now, the status quo has led to the United States' asymmetrical advantage, which is a market advantage and a national security advantage. And so there's the tension between those two things: wanting to regulate versus don't rock the boat. If we regulate, we might rock the boat, and we don't know what's going to happen to the broader ecosystem. So we're going to grudgingly, inch by inch, maybe do some legislation -- except we're not.
One of the things I participated in: Senate Leader Schumer had this AI Insight Forum, which was a series of, I think, eight or nine meetings. So ChatGPT is introduced in November, I think. By May or June, Sam Altman is on the Hill -- all this stuff is happening, happening, happening.
And then Leader Schumer says, we're going to have these meetings, we're going to talk to people, and then the Senate will be ready to move on legislation. You know how this story ends. We have these nine meetings. I think Elon Musk is at meeting one -- there are cameras and things. I'm at the second meeting, on innovation.
Marc Andreessen is there, and others, at the second meeting. So there's a series of them. And then nothing happened. Then we get a framework that's basically, in my interpretation, status quo: let's give some more money to NSF to do some AI R&D. And I want to be very clear -- I'm not disagreeing with that -- but nothing came out of that process that was going to be anything different from what we're already doing.
So you identify, I think, a very challenging tension. And on the other side of that, it means we're left with the Brussels effect -- EU regulation that does bind the companies becomes, by default, US regulation. And so we're in this kind of tricky place, a bit of a regulatory prisoner's dilemma. And so thank you for raising that.
AUDIENCE: Thank you for the great talk. You mentioned that the US government is now interested in buying these services from technology companies, and I want to follow up on that. I've been following how there's a huge recruitment of AI experts within federal agencies like the US Digital Service and DHS. And there's a big interest in, OK, what are some AI opportunities for the government, for public services?
And I'm curious what you think in terms of how we identify the right use cases. How do we make sure we're not just saying, we're going to serve refugees, so let's just put a chatbot on it? How do we make sure we're building the right things?
ALONDRA NELSON: Yeah, such a great question. And this is where it helps, or maybe gets me into trouble, that I studied the Obama administration, which sort of hatched really important new institutions like the US Digital Service -- an interesting model for now, but a different model. That model was: how do we get more technologists, which effectively meant engineers and some computer scientists, into government?
President Biden's AI Executive Order mandates this AI talent surge, so it's not just DHS and it's not just USDS. But the people you need are different. Given what it takes -- capital investment, data investment, and compute -- to make foundation models, it doesn't necessarily make sense for the US government to build them itself.
Part of what the USDS was doing was building websites, building small algorithmic systems, building scripts to make things run better in government. It's not clear, when you've got Llama 3, when you've got three or four or five foundation models, including some open-source ones that can be built upon, that it makes sense for the federal government to build these systems itself.
So if you're thinking about an AI talent surge, you don't just need builders. You need a whole suite of other people who know how to do a lot of other things: who know how to think strategically about data, whether or not they know how to use the data; who know how to think about where to find data and systems; how to think about the responsible and ethical issues around the data, the privacy issues, the procurement issues.
So what is the skill set for negotiating, as an agency, with a company to buy or license or use a foundation model for a certain use case? You want people who know -- well, you need to know something about AI. But do you need to have built a foundation model yourself to handle that negotiation? I don't think so.
So I think there needs to be a bit of a shift in the philosophy of how we're thinking about talent. Government does need a lot of technical talent, including AI scientists and engineers -- that's not what I'm saying. But there's a broader aperture we need to think about, one the Biden-Harris administration is just beginning to sort of think through with these talent surges, and I think it's important.
The other thing I would say to you is, if you have any wonk in you at all, there are two beautiful Office of Management and Budget memos -- one that came out in March and one that came out this week -- on government use of AI. They're gorgeous. Shalanda Young and her team, Jason Miller, have done a magnificent job on these memos.
The first one is about how government should think about the use of AI technologies for government services -- things that gatekeep people's access to everything from the FEMA benefits people are desperately needing now to Medicare and Medicaid. What is the threshold of safety and rights protection that these technologies should meet? That's what that first memo does.
The one that just came out late last month is about procurement: what are the rules that federal agencies should use? What level of transparency? What kinds of vendors? How do you keep the transparency iterative, since these systems are dynamic and changing?
So it's not like buying software off the shelf and doing a compliance check at the beginning. There needs to be, as this memo -- as government -- finally understands, a more iterative, dynamic relationship with the vendor around these AI tools and systems, because they're going to keep changing over the course of the contract. So they're very interesting.
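To make that "iterative, dynamic relationship" concrete, here is a minimal, hypothetical sketch in Python -- not drawn from the OMB memos themselves, and every name and check is illustrative -- of an agency re-running an agreed set of evaluations whenever a vendor ships a new model version.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class VendorContract:
        vendor: str
        approved_version: str
        # Each check takes the candidate model and returns True if it passes
        # (e.g., an accuracy floor on agency data, a bias audit, a transparency report).
        checks: List[Callable] = field(default_factory=list)

        def review_update(self, new_version: str, model) -> Dict[str, bool]:
            """Re-run every agreed-upon check before accepting a model update."""
            results = {check.__name__: check(model) for check in self.checks}
            if all(results.values()):
                self.approved_version = new_version  # accept the update
            # Otherwise the previously approved version stays in place.
            return results

The point of the structure is simply that approval becomes an ongoing decision tied to each model version, not a one-time gate at purchase.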
YAEL GRUSHKA-COCKAYNE: And we have time for maybe one more question.
AUDIENCE: Hi. Thank you for coming to speak with us.
ALONDRA NELSON: I'll just note that, so far, perhaps only self-identified women have asked questions.
YAEL GRUSHKA-COCKAYNE: I know. There were a couple of men wanting to ask questions.
ALONDRA NELSON: OK.
YAEL GRUSHKA-COCKAYNE: Don's been waiting patiently. I'm OK with that.
ALONDRA NELSON: Yeah. No, I know. It's really shocking, actually.
AUDIENCE: So my question is, how do we go about making algorithms objective and equitable when oftentimes, it's the data sets that are biased or have gaps, especially when these AI systems are coming out and changing so quickly? Whereas with computers, we had more time to react.
ALONDRA NELSON: Yeah, that's such a great question. Beautifully said. We just have to ask that question again and again and again. It's not a one-off question. You don't just ask it at the beginning, before the system is deployed. You have to ask it after it's deployed, as it's being used, in each context and use case.
Sometimes having a data set that's not representative, or that's incomplete, doesn't matter for certain uses. For other uses, it matters and the stakes are very high. As we move towards -- I put this in quotes -- "general purpose tools," because these tools cannot be used for every purpose.
I think we've got to ask that question again and again, at many stages in the life cycle of an algorithm and its use, and also in the context of the specific use case. A use case around sensitive data with very high stakes, like health care, is distinct from, I don't know, asking a chatbot to write a poem for you in the voice of Shakespeare.
So I think we've got to open ourselves up from this one-time, one way of doing things. I wrote an essay for Foreign Affairs on how to think about AI governance, and one of the models I used -- a colleague mentioned this earlier -- is the National Institute of Standards and Technology.
Starting in cybersecurity, and then with their AI Risk Management Framework, they began to use something like version 1.0 or 1.1. They kind of do these guidance documents that have versions, almost like software has versions.
So even as we're thinking about governance tools and levers -- I was talking to Danielle about Section 230. I mean, that law is almost 30 years old. And that's the way the Senate likes to work: we want to create a law, it's supposed to last forever, and if it's a good law, it will endure. There'll be parsimony around the language -- 230 words or whatever? How many? 26 words. That's the name of that great book, actually.
So there's been this sort of obsession, I think, in lawmaking with parsimony of language and with laws that are supposed to endure. And around new and emerging technologies, I think we've got to give up the sense that the law has to endure unchanged. We've got to build in iteration and versioning, much like software versions. And I think NIST, within government, provides a good example of that.
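As one concrete illustration of asking the representativeness question "again and again," here is a minimal sketch of a recurring audit -- the column names and thresholds are hypothetical, not anything prescribed in this conversation -- that compares each group's share of the data and the model's accuracy for that group, and is meant to be re-run before deployment, after deployment, and on each new batch of production data.

    import pandas as pd

    def audit_by_group(df: pd.DataFrame, group_col: str, label_col: str,
                       pred_col: str, min_share: float = 0.05,
                       max_gap: float = 0.05) -> pd.DataFrame:
        """Flag groups that are under-represented in the data or that the
        model serves noticeably worse than it serves everyone overall."""
        correct = df[label_col] == df[pred_col]
        overall_accuracy = correct.mean()
        report = pd.DataFrame({
            "share": df[group_col].value_counts(normalize=True),
            "accuracy": correct.groupby(df[group_col]).mean(),
        })
        report["under_represented"] = report["share"] < min_share
        report["accuracy_gap"] = (overall_accuracy - report["accuracy"]) > max_gap
        return report

Whether a flagged gap actually matters is the use-case question from the exchange above: it may be tolerable for a poem-writing chatbot and unacceptable in health care.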
YAEL GRUSHKA-COCKAYNE: We have plenty of time afterwards for some questions. So those who didn't ask questions, any men in the room, are welcome.
ALONDRA NELSON: All are welcome.
YAEL GRUSHKA-COCKAYNE: Anybody is welcome to ask. But gosh, we've talked about so much, and it's been fascinating. We started with the fact that women have always been in computing -- this is not new. We've talked about the AI Bill of Rights, of course, and your amazing work and leadership there. And we've talked about why, with AI and generative AI, there's a reason we're feeling this anxiousness: it is moving faster than any other technology.
And we have not totally digested it or figured out how to stay on top of it. We've talked about whether regulation and governance should be bottom-up or top-down -- and maybe it's a little bit of both. We've talked about safety checks and the responsibility of a lot of our business school students who are on their way to lead some of this innovation.
We've talked about technology at the White House, from Skype to GitHub -- quite a lot of movement there. We've encouraged higher ed to play a role in answering some key questions on the research front and to embrace the technology in a variety of different ways.
And finally, we've encouraged our students to study what they want and to recognize that there are many different ways to find their way into the tech conversation. Not all of them are obvious up front, and they don't all require computer science.
But please, I hope you'll join me in thanking, not only my co-moderator, Melody Barnes. Thank you very much--
[APPLAUSE]
--for being here. I'm so honored to meet you. I've waited for this for many years. And of course--
ALONDRA NELSON: We all feel that way.
YAEL GRUSHKA-COCKAYNE: And of course, for Dr. Nelson for visiting us here in Charlottesville, thank you.
[APPLAUSE]
ALONDRA NELSON: Thank you, Yael.
[CHEERS, APPLAUSE]