University of Michigan School of Information

403: The path to global AI regulations with Merve Hickok

Information Changes Everything

News and research from the world of information science
Presented by the University of Michigan School of Information (UMSI)

Episode

403

Released

June 4, 2024

Guests

Merve Hickok, lecturer at the University of Michigan School of Information; responsible data and AI advisor at Michigan Institute for Data Science; affiliated faculty at Gerald R. Ford School of Public Policy; president and research director at Center for AI & Digital Policy; and founder of AIethicist.org

Summary 

In this episode of Information Changes Everything, we look at the complex world of AI policy and regulations with Merve Hickok. Hickok is a globally renowned expert on AI policy, ethics and governance, and the founder of AIethicist.org. She brings a wealth of knowledge from her roles at the University of Michigan School of Information and the Michigan Institute for Data Science.

Resources and links mentioned

Reach out to us at [email protected]

Timestamps

Intro (0:00)

Information news from UMSI (1:20)

Hear excerpts from Merve Hickok’s 2023 seminar “AI policy and regulation landscape” at UMSI (2:43)

Next time: Paul Woods on designing the future of automotive UX (19:10)

Outro (19:56)

Subscribe

Subscribe to “Information Changes Everything” on your favorite podcasting platform for more intriguing discussions and expert insights. 

About us

The “Information Changes Everything” podcast is a service of the University of Michigan School of Information, leaders and best in research and education for applied data science, information analysis, user experience, data analytics, digital curation, libraries, health informatics and the full field of information science. Visit us at si.umich.edu.

Questions or comments

If you have questions, comments, or topics you'd like us to cover, please reach out to us at [email protected].

Merve Hickok (00:00):

I think for the first time ever, we have seen a US president writing an opinion piece in the Wall Street Journal, asking both the Democrats and Republicans to work together on bipartisan legislation, saying we need the guardrails.

Kate Atkins, host (00:16):

That was UMSI lecturer Merve Hickok speaking at an event organized by UMSI assistant professors Paramveer Dhillon and Sabina Tomkins. And this is Information Changes Everything, where we put the spotlight on news and research from the world of information science. You're going to hear from experts, students, researchers, and other people making a real difference. As always, we're presented by the University of Michigan School of Information, UMSI for short. Learn more about us at si.umich.edu. Today we'll hear more from Merve Hickok. She gives a high-level review of AI policy landscapes across the world. She's the founder of AIethicist.org and is a globally renowned expert on AI policy, ethics and governance. At the University of Michigan, she's a lecturer at the School of Information, the responsible data and AI advisor at the Michigan Institute for Data Science, and affiliated faculty at the Gerald R. Ford School of Public Policy. Before we jump in, a few other people and projects that you should know about.

(01:25):

Last semester, UMSI students, Stephanie Vettese, Ziyan Zhou and Clayton Zimmerman used their skills to reimagine the public spaces at U-M’s Bentley Historical Library. Their goal? To help the U-M community feel comfortable accessing the library's archives and resources. 

(01:47):

UMSI researcher Elsie Lee-Robbins wanted to know what would happen if she made data visualizations out of Lego bricks. Since then, she's been inspiring students, staff, and faculty to participate in a creative project that fuses data and play. 

(02:02):

Since 2018, the University of Michigan's Go Blue Guarantee has provided up to four years of free undergraduate tuition for high achieving in-state students whose families meet the income criteria. The goal of the program is to keep a U-M education accessible to Michigan residents and with the recent creation of the UMSI Graduate Guarantee program, UMSI has become the first U-M school or college to extend the free tuition program to support graduate students in information. For more on all of these stories, check out si.umich.edu or click the link in our show notes. Now back to Merve Hickok.

Merve Hickok (02:47):

So great to be here, actually talking for the first time at SI, although I've been with SI for three years now. What I do in my work outside of SI is really focused on AI policy and regulations around the world, and I would like to give a whirlwind tour of what's happening in the US and globally. So let's step back. I know AI policy and regulation have been a topic of discussion more lately, the sexy topic of the year, especially since the launch of generative AI systems, but it actually has a few years of history behind it. I would like to take us back to 2018 and build on what has happened since then and why we are here today. So in 2018, my organization came up with the Universal Guidelines for AI, which are 12 concepts, 12 norms, that set out rights for individuals in the age of AI, how they interact with AI, and obligations for organizations, whether public or private, on how they use and deploy AI systems.

(04:05):

It also included a couple of prohibitions, and I'll go into a bit more detail on what those are in a minute. But this was on the back of Japanese Prime Minister Shinzo Abe talking about AI governance, the need for global AI governance, back in 2016 and 2017. And we created this group of academic researchers, advocates and policymakers, more than 300 of them representing 40-plus countries, to come up with these universal guidelines at the time. Then the next year, 2019, these guidelines were brought to the OECD, the Organisation for Economic Co-operation and Development, which has membership from some of the biggest countries in the world that cooperate globally on anything that has to do with the economics and development of a country. So in that 2018-2019 timeframe, the OECD picked up AI governance as a focus area and, after significant discussions, came up with the OECD AI Principles, the trustworthy, human-centric AI principles that many organizations and many countries around the world have since picked up.

(05:25):

The same year, 2019, the G7 and G20 groups of countries also picked up the OECD principles, so the number of countries that have adopted them has increased; 50-plus countries in total have now adopted them. Fast forward to 2021: UNESCO adopted the UNESCO AI recommendations, which center ethics, inclusion and human dignity, principles and recommendations that 193 countries adopted two years ago. And there are also a couple of prohibitions in there: a ban on social scoring as well as a ban on mass surveillance. On a similar timeline, starting from 2018, the European Union started discussing AI policy and regulation and what that would look like for the union as a single market. They started with a whole year of listening sessions, bringing in high-level experts and trying to figure out what norms should be prioritized, what should be the focus of an EU regulation.

(06:39):

In April 2021, they announced a draft regulation, and since then they have been discussing it, going back and forth between institutions and gathering public input on what the EU regulation should look like in its final form, looking for completion at the end of this year or the beginning of 2024. Earlier in the year, the G7 countries met in Hiroshima, Japan, which after six years now holds the presidency again, and these seven of the biggest countries joined hands and said, we need to align on international AI governance, and what would that look like? Oh, and by the way, generative AI systems have also been introduced into the equation: what should be our global approach to governance of generative AI systems and AI in general? In the meantime, the African Union, as a union, joined the G20. For those of you who might not be involved in AI policy or global regulatory conversations, this is huge news, because the G7 and G20, as organizations, are to a certain extent, rightfully, criticized for not representing all the countries globally; it is a lot of US, EU and Anglo-Saxon countries that are represented.

(08:19):

So the African Union joining the G20 as a plus-one is really big news, because we hope to see more inclusion and different perspectives brought into these conversations as well. The Council of Europe is very different from the European Union. The Council of Europe is an organization with 46 member states; it obviously includes all the European countries, but on top of those it includes countries like Mexico, Turkey, Azerbaijan, Georgia, et cetera. The US is an observer state to the Council of Europe, and what the Council of Europe has been doing for the last three and a half years is trying to draft an international AI treaty. So while all this other stuff is happening, the Council of Europe is drafting an international AI convention, which would be open to every single country in the world that would like to accede and ratify. Hopefully we'll see that convention work finalized sometime in 2024, and it will be a game changer.

(09:35):

In the meantime, in the United States, like I said, AI policy and regulation conversations had not been at the forefront for policymakers up until the beginning of this year, but we now see multiple executive orders and bipartisan agreements across the aisle, Republican and Democrat, on the need for some sort of regulation on AI. If you are following the conversations on the House or the Senate side, there is almost one hearing a week currently happening, trying to tackle AI, AI governance and AI legislation from different angles. And in the meantime, China has a number of AI regulations and pieces of legislation in place. In fact, while everyone is looking at Europe as the leading AI legislative region of the world, which to a certain extent it is, China has been experimenting with AI regulation for several years now, trying to figure out what works, what doesn't work, what kind of guardrails should be in place.

(10:47):

The majority of the focus in China is on explainability and transparency, but it is a good experiment for the rest of the world to see: can you actually do this? If you impose these disclosure requirements, these transparency requirements, can you actually do this? So let's step back and go into a bit more detail on all of these. Like I said, we introduced the Universal Guidelines for AI back in 2018. These are not just principles; at this point, I think we have more than 200 principles or guidelines that have been introduced, and everyone, every company, every country has been defining them the way that they want. We have been very adamant that these should not be just principles, but actually rights for individuals, rights for communities, obligations for companies and governments, as well as prohibitions on certain AI systems, such as unitary scoring or secret profiling, especially by governments, which are so fundamentally in conflict with human rights and fundamental rights that they do not have a place, they should not have a place, in any society.

(12:09):

Since I didn't mention it at the very beginning: my organization, CAIDP, is a human rights defender organization, but we work at the intersection of AI policy and regulations. So everything that we do and recommend to governments, international organizations, et cetera, is focused on how to promote human rights, democracy and the rule of law in the age of AI. It encompasses anything from labor rights to immigrant rights as AI systems are being used, from mass surveillance systems to autonomous weapons; anything you can think of that would undermine human rights through the use of AI, you'll see us in that conversation. Like I mentioned, the guidelines have been the foundation of the OECD AI Principles. The OECD AI Principles were the first global AI framework in that sense, and carried on the same rights and norms from the universal guidelines.

(13:22):

But obviously the OECD AI Principles are value-based principles. Because they're voluntary, it is not clear how any country in the world would implement them, how we evaluate the implementation, and what would happen next. It's one thing to put out these principles; what do you do afterwards? So three and a half years ago, we launched this project: okay, let's look at how to evaluate this implementation, and let's look at the AI policies and national AI strategies of different countries around the world and compare what it is that they actually commit to and what they do in practice. Because you can commit to anything, say you're human-centric, that you prioritize human rights and democracy, et cetera, as a country, and then turn around and deploy, for example, predictive policing or mass surveillance systems, which is in conflict with what you committed to.

(14:28):

So we wanted to understand that gap. We created this methodology based on 12 metrics, 12 questions, to assess each country's policies and practices in AI. We look at human rights, the Universal Declaration of Human Rights; we look at the OECD principles that I just mentioned; we look at the UNESCO AI recommendations; and then we look at some democratic values and global norms such as public participation and independent oversight. So if you take, for example, a single country like the US: has the US public been involved in, participated in, or had a chance to be involved in the US's national AI strategy or its AI policies? Is there an independent oversight body that would hold US government federal agencies accountable for their actions, or even the corporations, et cetera? So we asked these 12 questions of the 75 countries that we are assessing, which is quite an undertaking if you look at it.

(15:40):

And it is 1,000-plus pages; it makes for great bedtime reading. But we also do this analysis every single year, so the current version that you're seeing right now is the third edition. Every year we go back to our existing analysis and what has happened since last year in terms of policies and practices, and do a complete review. It also creates this longitudinal analysis, right? Because we're comparing against the same metrics and assigning scores to each of these 75 countries, you can see how any country has moved or shifted in terms of its AI policy and practices year over year. This has been a really interesting year. President Biden as well as VP Harris have taken this on personally; it has been one of the signature items for the administration. I think for the first time ever, we have seen a US president writing an opinion piece in the Wall Street Journal, asking both the Democrats and Republicans to work together, saying we need bipartisan legislation, we need enforceable legislation and regulations.

(17:01):

Whether or not those are the right guardrails, we'll see. A lot of the work that we've been doing builds its foundations on the Universal Declaration of Human Rights. We consider that the global norm: the protection of human rights should be prioritized over other things, or, if you have a regulation, human rights and the rule of law should be promoted and enhanced by those regulations and by those AI systems as well. So what should be some of the guardrails you put on your AI system to enhance human rights, with the majority of society benefiting from these systems? Generative AI, I think, is creating more questions. Up until this point, a lot of these norms have held more or less, but generative AI has kind of thrown in a wrench, because we don't know what we don't know; the systems are so new. For AI and algorithmic systems, automated decision-making systems, we had decades of experience, so you kind of know the impact on civil rights, on the environment, on society. With general-purpose AI systems, gen AI systems, that is yet to be seen; we'll see what happens and whether those norms, guidelines and frameworks still hold. So the G7 countries are trying to come up with governance methods and norms for generative AI systems, and we'll see by the end of the year what that will look like as well. But even in that conversation, their underlying focus is human rights and democratic values.

Kate Atkins, host (18:59):

You can watch the full talk by clicking the link in our show notes. To learn more about upcoming events and conversations like this, visit us at umsi.info/events, and tune in next time to hear from award-winning designer and author Paul Woods. Paul was invited to speak to automotive user experience students at UMSI in 2023. During his talk, he shared why automotive UX is one of the most exciting areas a designer can be working in today.

Paul Woods (19:30):

In my experience, the shortest distance to get things in front of people is the path to success. I've had all sorts of experiences where things weren't put in front of people for a very, very long time, and there were hundreds of presentations and decks, and it looked very important, there were a lot of small fonts on it, but then you put something in front of a user and it's like, yeah, I don't like it. You'll never know that until you test it and put it in front of someone.

Kate Atkins, host (19:54):

That's in our next episode. Before we go: do you love being the first to know about new information science research? Then UMSI's Research Roundup is for you. It's a free summary of the latest findings from researchers at the University of Michigan School of Information. To sign up, visit umsi.info/research-email or click the link in our show notes. The University of Michigan School of Information creates and shares knowledge so that people will use information with technology to build a better world. Don't forget to subscribe to Information Changes Everything on your favorite podcasting platform. If you have questions, comments, or episode ideas, send us a note at [email protected]. From all of us at the University of Michigan School of Information, thanks for listening.
