Amandeep Singh Gill, the United Nations tech policy chief, speaks during an interview with The Associated Press, Friday, Sept. 22, 2023, at U.N. headquarters. (AP Photo/Mary Altaffer)


UNITED NATIONS (AP) — Artificial intelligence, and how and whether to regulate it, has gotten a lot of discussion in and around this year’s U.N. General Assembly meeting of world leaders. With a U.N. advisory group on AI set to convene this fall, the world organization’s top tech-policy official, Amandeep Gill, sat down with The Associated Press to talk about the hopes, concerns and questions surrounding AI.

Here are excerpts from the interview, edited for length and clarity.

___

AP: A number of national governments and multinational groups are talking about or beginning to take action on setting guardrails for artificial intelligence. What can the U.N. bring to the table that others can’t?

GILL: I’d say three words. Inclusiveness — so bringing many more countries together, compared with some of the very important existing initiatives. The second one is legitimacy, because there is a record of the U.N. helping countries and other actors manage the impact of different types of technologies, whether it’s bio, chem, nuclear, space science — not only preventing the misuse, but also promoting inclusive use, peaceful uses of these technologies for everyone’s benefit.

The third one is authority. When something comes out of the U.N., it can have an authoritative impact. There are certain instruments at the U.N. — for example, the human rights treaties — with which some of these commitments can be linked. (For example, if an AI feature) leads to the exclusion of a certain community or the violation of the rights of certain people, then governments have an obligation, under the treaties that they have signed at the U.N., to prevent that. So it’s not just a moral authority. It creates a kind of compliance pressure for living up to whatever commitments you may sign up for.

AP: At the same time, are there challenges that the U.N. faces that some of the other entities that are active on this don’t — or don’t to the same extent?

GILL: When you have such a big tent, you have to have a good process that’s not just about ticking the box on everyone being there, but having a meaningful, substantive discussion and getting to some good outcomes. The related challenge is getting the private sector, civil society and the technology community involved meaningfully. So this is why, very consciously, the Secretary-General’s advisory body on AI governance is being put together as a multi-stakeholder body.

A third limitation is that U.N. processes can be lengthy because consensus-building across a large number of players can take time, and technology moves fast. Therefore, we need to be more agile.

AP: Can governments, at any level, really get their arms around AI?

GILL: Definitely. I think governments should, and there are many ways in which they can influence the direction that AI takes. It’s not only about regulating against misuse and harm, making sure that democracy is not undermined, rule of law is not undermined, but it’s also about promoting a diverse and inclusive innovation ecosystem so that there is less concentration of economic power and the opportunities are more widely available.

AP: Speaking of equal opportunities, some people in the Global South hope AI can close digital divides, but there’s also concern that certain countries may reap the technology’s benefits while others get left behind and left out. Do you think it’s possible for everyone to get on the same page?

GILL: That’s a very, very important concern, something that I share. For me, it’s a reason for everyone to come together in a more nuanced way: going beyond this dichotomy of “promise and peril” — which often comes up in the minds of those who have agency, who have the capability to do this — to an understanding where access to opportunity, the empowerment dimension of it, is also front and center.

So, yes, there is the opportunity, there is the excitement. But how to seize the opportunity is a very, very important question.

AP: There’s a lot of talk about bringing together the conversations going on around the world about regulating AI. What do you think that means, and how can it be realized?

GILL: Having a convergence, a common understanding, of the risks, that would be a very important outcome. Having a common understanding on what governance tools work, or might work, and what might need to be researched and developed, that would be very valuable. A common understanding on what kind of agile, distributed model is needed for governance of AI — to minimize the risks, maximize the opportunities — would be very, very valuable. And finally, having a common understanding of the political decision we need to take next year at the Summit of the Future (a U.N. meeting planned for September 2024), so that our effort across those functionalities is sustainable and has the public’s understanding and the public’s trust.

AP: When it comes to AI, what keeps you up at night? And what makes you hopeful when you wake up in the morning?

GILL: Let me start with the hopeful side. What really excites me is the potential to accelerate progress on the Sustainable Development Goals by leveraging AI, particularly in the priority areas of health, agriculture, food security, education and the green transition. What worries me is that we let it go forward in a way that, one, deludes us about what AI is capable of; and two, leads to more concentration of tech and economic power in a few hands. These may be very well-intentioned individuals and companies, but democracy thrives in diversity, in competition, in openness.

So I hope that we take the right direction and that AI does not become a means to kind of subvert democracy, to delude society at large and reduce our humaneness. Those are the kind of questions that I worry about, but I’m overall very optimistic about AI.

Copyright 2023 The Associated Press. All rights reserved.