
KAS-Strathclyde Interview Series on AI, Global Governance and Ethics: Interview with Dr Aneja

Dr Urvashi Aneja is Co-Founder and Director of Tandem Research, a think tank based in Goa, India. She leads Tandem Research’s AI in Society Programme.

Dr Aneja shares her views on India's data governance, AI strategy and the regulation of Big Tech companies.

Question: Can you give us an overview of the AI and ethics situation in India and your work on it?

Urvashi Aneja: The Indian government put out its draft National Strategy for AI in 2018, called AI for All, which identified some of the key sectors in which it was hoping to develop AI solutions; most of these were social sectors such as health and education. A lot of the challenges, risks and broader societal implications did not figure as centrally in that strategy as we thought they should have. Our aim with our programme has been to look at these sectors and ask: what are the different knowledge inputs, the different types of expertise, the different issues that need to be brought into this conversation to produce a fairer, more inclusive and safer AI strategy? We have been hosting a series of policy labs with government, industry, civil society and other relevant stakeholders on each of these sectors, to identify, in some sense, a civil society perspective on these issues. What are the ways in which we can align AI trajectories with broader societal goals, if at all? I also now do a fair amount of work on governance: what are the governance challenges in the Indian context and how can they be overcome? A lot of the conversation around AI ethics, governance and regulation so far is based on the experiences and capacities of industrialized economies. We need to build that evidence base for a country like India, and equally develop policies that are specific to, or at least speak to, the Indian context.

Question: What has happened since the Indian Government released the draft AI strategy? Has much happened since then in terms of new policy strategies, initiatives or even new legislation to govern AI in India, or have things remained fairly static since that policy was issued?

Urvashi Aneja: There has been a bit of both. On the one hand, we have the draft National Strategy document; a final strategy document was never released thereafter. The government has made a number of announcements and put out a couple of white papers, for example on building a cloud infrastructure for AI-specific applications in India. Within specific sectors, various regulatory arms have released guidelines that discuss the role of inferential decision making - for example, in the financial sector, the Securities and Exchange Board of India has released reporting instructions on the use of algorithmic decision making systems by mutual funds. But we do not have an overarching governance framework, a positioning on ethics, or a statement on accountability or explainability. Many pieces are missing, but the biggest or most obvious one is that we do not have a data protection law yet. A draft has been sitting in the Indian Parliament for a while. Experts have given their comments, including us at Tandem, but the Personal Data Protection Bill has not been passed yet. Even if it is passed, it does not adequately deal with the issue of algorithmic or inferential decision making. There is now a white paper out on the regulation of non-personal data (NPD), but that is still at a very early stage, and what we have so far is deeply problematic. It assumes that non-personal data can legitimately be used for state or commercial purposes. One of the ways it defines non-personal data is as personal data that has been anonymized, but we know that such anonymization does not work, especially with new advances in machine learning and other techniques. The paper also assumes that having NPD will be key for India to remain competitive in the global AI race. But advances in the AI field show that large quantities of data may not be needed, so claims that data is needed for AI competitiveness may soon become very tenuous.
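
To illustrate why stripping names does not amount to anonymization, here is a minimal sketch of a linkage attack: quasi-identifiers left in an "anonymized" release (postcode, birth year, gender) are joined against a public auxiliary dataset to recover identities. All records, names and field names below are hypothetical and are not drawn from the interview.

```python
# Minimal linkage-attack sketch: an "anonymized" release with direct
# identifiers removed is re-identified by joining on quasi-identifiers
# against a public auxiliary dataset (e.g. a voter roll).
# All data below is hypothetical.

# Release with names/IDs stripped but quasi-identifiers intact.
released = [
    {"postcode": "403001", "birth_year": 1984, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "110001", "birth_year": 1991, "gender": "M", "diagnosis": "asthma"},
]

# Public auxiliary dataset that still carries names.
auxiliary = [
    {"name": "A. Sharma", "postcode": "403001", "birth_year": 1984, "gender": "F"},
    {"name": "R. Gupta", "postcode": "110001", "birth_year": 1991, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def link(release, aux):
    """Yield (name, sensitive attribute) for every unique quasi-identifier match."""
    for row in release:
        matches = [a for a in aux
                   if all(a[k] == row[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:  # a unique combination pins down one person
            yield matches[0]["name"], row["diagnosis"]

for name, diagnosis in link(released, auxiliary):
    print(f"{name} re-identified; sensitive attribute: {diagnosis}")
```

In practice, richer auxiliary data and machine learning make such matching considerably easier, which is why anonymization alone is a weak safeguard.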

On the policy front, we have not seen much movement in a substantive or conclusive way. That being said, there is a great deal of enthusiasm for using AI in a number of public functions - everything from facial recognition technology to, now, dealing with the pandemic. The focus of governance in India is on how we can fuel the AI industry, rather than on how to govern it as a field and what our red lines might be.

Question: Why is the Data Protection Bill so important for the broader discussion on AI governance in India?

Urvashi Aneja: At the most basic level, data is the building block of AI systems, so getting a sense of how we protect people's data has to be a first-order concern. There has to be some sense of: who owns this data? Who is protecting it? What happens when it is used for different purposes? What are the grievance mechanisms? What are the ways in which it may not be used? Can it be combined? How can it be combined? Do we want to firewall certain types of combination? How long can it be stored? All of these questions will have a huge impact on how we think about the AI ecosystem. More generally, there is a strong sense, even within government, that having more and better data, and AI systems running on that better data, is key to development gains and to nation building - even if this might become less important as the field advances. If that is the starting point, there needs to be a framework to govern that space. But it is certainly not the be-all and end-all of the conversation around governance.

Question: Are there other, pre-existing areas of law, regulation, governance or strategy - such as competition law or other human rights - that have been associated with the debate around AI in India?

Urvashi Aneja: No, a lot of the mainstream policy discourse has been focused on the data question. I think that is also because some of the use cases that are emerging, or that have been the most prominent, have been particularly worrying from the perspective of privacy and data protection - facial recognition, for example, or the contact tracing app that is now being used by the Indian government. But while data protection is important, it is not everything. Competition policy, for example, has a big role to play. If you do not have a competition policy that can accommodate the effects of data control and network effects, you are going to have huge monopolies, and these monopolies will have the capacity to build AI solutions. If the market is dominated by two or three big actors, then those two or three big actors are also setting the privacy standards. We also need to establish certain collective rights about what areas of social life should undergo datafication. We need to decide not just how to regulate AI adoption, but also into which parts of social life we want to introduce machine learning at all.
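
To make the point about network effects concrete, here is a toy simulation, not from the interview: if each platform's value to a newly arriving user grows superlinearly with its existing user base (a Metcalfe-style assumption, value proportional to the square of the user base), even a small early lead tends to compound into dominance. All numbers and platform names are hypothetical.

```python
# Toy winner-take-all dynamic under network effects: each new user joins a
# platform with probability proportional to its assumed value, which grows
# with the square of its current user base (Metcalfe-style assumption).
import random

random.seed(0)
users = {"platform_a": 510, "platform_b": 490}  # hypothetical near-even start

def value(n: int) -> int:
    """Assumed utility of joining a platform with n existing users."""
    return n * n

for _ in range(10_000):  # each arrival picks a platform in proportion to value
    weights = [value(n) for n in users.values()]
    chosen = random.choices(list(users), weights=weights)[0]
    users[chosen] += 1

total = sum(users.values())
for name, n in users.items():
    print(f"{name}: {n} users ({100 * n / total:.1f}% share)")
```

Under these assumptions, the initially slightly larger platform typically ends up with the overwhelming majority of users - the tipping dynamic that competition policy would need to accommodate.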

Question: Are competition law and policy more developed and better implemented in practice than data protection?

Urvashi Aneja: There have been statements from the competition regulators about the need to update competition policy to reflect data, network effects, and vertical and horizontal integration in markets, and there have been instances of the competition authority in India calling out Google and Amazon for anti-competitive behavior. That being said, Facebook's investment in Reliance was approved by the competition authority without these concerns being raised. Competition policy sets a monetary threshold above which it will examine a deal as potentially anti-competitive, but in the case of big tech companies that monetary threshold may not be the best benchmark for identifying whether a deal could have anti-competitive effects on the market. If there is a proposal to merge data between two actors, that may not be the best thing for a healthy market, even if the investment percentage is small. But it is not just competition policy. There are other important things which are not being discussed, like mandating platform neutrality and ensuring platform interoperability. There is also a growing conversation around data stewardship and data trusts as ways to balance individual privacy, commercial interests and broader development objectives.

There has also been some conversation around how to regulate social media platforms. Last year, the Indian state did release an amendment to the set of guidelines for these intermediaries, which has still not been finalized. The guidelines mandated that social media platforms with over a certain number of users should use algorithms and AI for content moderation. It is a really piecemeal and not well-thought-out conversation, because the use of AI for content moderation could create a whole set of problems of its own: coded biases, human biases and problems with explainability.

Question: Has there been any discussion about differentiating between different levels of risk in India?

Urvashi Aneja: No, there has not. In the Indian context, the narrative is so focused on innovation, nation building and 'leapfrogging'. The development challenges are so immense that everything else is, in some sense, just a barrier to be crossed or a hoop to jump through, rather than a serious challenge in itself. There is no adequate public recognition by government authorities of what the risks could be: either they are naive as to what the problems with AI would be, or they choose to ignore them, or they think they are not as important given the range of possible benefits. NITI Aayog just released a draft document on responsible AI which does refer to 'risk', but only in passing. It does not question whether AI adoption is needed in the first place, at what scale, in what sectors, or where any red lines should be.

Question: You mentioned that AI has actually been used in public functions in India with facial recognition being an example. Has there been much discussion about regulating these particular uses and/or regulating public uses of AI differently from private uses?

Urvashi Aneja: There has been discussion, and that conversation is ongoing right now, about how these uses should be regulated. A lot of it is quite reactive to what the government is putting out - so you have an announcement of an automated facial recognition system (AFRS) being used for the criminal justice system, and then you have a response to it. But I do not think there is an open, granular conversation at the level of government. The distinction between public and private systems is just starting to be discussed at the level of civil society. The conversations around AI are also happening within a very small community; it is not something that makes the national news unless there is a big, explosive incident, and it is not part of the public discourse as a broader discussion about the use of these technologies. There is a lot of hype around AI. On one hand, a lot of companies brand themselves as AI-first and may not necessarily be doing any AI. At the same time, companies which are doing a lot of AI may not be branding themselves as AI-first, downplaying it instead. Within the private sector, in consumer applications, I am not sure how much public awareness there is of when machine learning systems are being used.

Question: So there has not been a huge amount of public discussion around AI and instead it has been confined to more expert circles in India?

Urvashi Aneja: Yes. It is not in the public discourse as a conversation about why Netflix is showing me a particular kind of content. There is not a broad-based sense of the ways in which machine learning systems are now shaping how we live, of what we should do about that, or of whether we should be concerned. It is much more issue-specific. One example is facial recognition. Another is misinformation and hate speech, which have been concerns in India, as in many other places. There have been a number of instances of lynching, violence and discrimination in India which may have raised some issues around AI. Even there, most of the conversation is focused on the right to freedom of speech versus public health and safety, not so much on the algorithmic amplification of such content.

Question: Can you tell us about the makeup of the AI industry in India or industries using AI? Is AI being developed in India or are big companies from other parts of the world selling AI to India?

Urvashi Aneja: There is a bit of both. A lot of the large tech companies are developing specific algorithms for specific kinds of uses, and those are being picked up off the shelf and applied, particularly in the B2B and enterprise sector for back-office business operations and the like. At the same time, there are new startups, particularly in social sectors like health and education, which are looking to develop homegrown AI solutions and products. But the bulk of it is still being led by larger global technology companies, who have the finances and, most importantly, access to data, computing power and storage. One big issue for smaller companies in India is simply not having access to enough structured data, and the process of cleaning up data being very cumbersome and expensive.

Question: What about India's global relations around AI strategies and governance? Does the Indian government have a globally oriented strategy? Has AI policy, whether internationally or domestically in India, been affected by developments such as the increasing tension with China?

Urvashi Aneja: India definitely has a global, outward-looking strategy vis-a-vis its AI programme, officially and rhetorically. You will frequently hear Indian ministers and parliamentarians saying: we missed the boat on previous industrial revolutions, so we cannot miss the boat on the fourth industrial revolution. Equally, in our draft national strategy paper, the Indian government does talk about India being a “garage” for developing AI solutions for the rest of the developing world. You have a huge population, a high degree of mobile and internet penetration, and a unique and diverse set of development challenges. India could become a place to experiment and test solutions that could then be exported to the rest of the developing world. At a global level, I do not see India being too participative yet, but I am sure that will change.

The Chinese angle is interesting and very topical. The recent investments by Facebook and Google in Reliance Jio, now India's biggest tech company, are certainly shaped by the concerns around China and the attempt to build a domestic tech ecosystem. The Indian consumer is reliant on China-based apps and Chinese tech hardware. Globally, at a geopolitical level, we know of the rising animosity between China and the US, and we see Western countries blocking Huawei. With these lines and divisions emerging, and with Google, Facebook and Microsoft all investing in Jio, they are looking very keenly at the Indian market as the next big thing. Certainly that is driven by their own commercial interests, but at a broader policy level, for the Indian government it is also a way to build a tech ecosystem that is not so reliant on the Chinese.

Question: Do you think the way India talks about data sovereignty or data as a public good is impeding India from increasing data rights for communities domestically?

Urvashi Aneja: It is tricky. On one hand, India does want greater sovereign control; it wants to rein in the power of big tech companies. There is a strong argument of ‘Indian data for Indian development’. Even the conversation on non-personal data right now is very much about that, because the Personal Data Protection Bill contained a clause which said that companies would be required to hand over non-personal data to the state for the purposes of development and so on. There is a strong sense that Indian data should be for Indians, and a strong sense that a kind of digital colonialism is happening, where global tech companies are extracting the data of Indians and there is an unfair distribution of value. India is Facebook’s largest market, but Facebook does not have a single data center in India, for example. But the question then is: if you say that Indian data is for Indians, who is the relevant actor? Is it going to be Reliance Jio? Are we saying, let us not have these global monopoly powers and let us instead have Indian monopoly power? Or are we saying that instead of global big tech, we should have big state tech? I do not think either of those two is where we want to end up. If we are to be serious about using community data for community purposes, about using data to benefit the public good, then much more needs to be done than just making sure the data does not leave India. You need to think about competition policies, about building your own regulatory capacity, about how you can build and develop your domestic tech ecosystem, and most importantly about ways to bring community needs and rights to the fore.

Question: Do you think that the approach of prioritizing development and innovation may lead to some unintended consequences for the Indian government? Is there a risk that a particular implementation of AI suddenly becomes very controversial and there is a backlash against the government or against particular companies?

Urvashi Aneja: We are already seeing it in some ways. Aadhaar, the biometric identification system, has some use cases where AI is being used to identify people. Even if you take the AI conversation out of it and think of Aadhaar as an example, there have been unintended consequences. There has been a huge amount of exclusion, and even a large number of deaths, because of it. There have been dubious uses of Aadhaar data, and a number of cases of misuse and cyber security breaches. But somehow, within the media and more broadly, we seem to brush those issues under the rug and just carry on. In the Indian context, development needs are so immense, and there is a constant refrain: we do not have data, we need better data. So a lot of it often just comes down to an ideological conversation or dispute amongst various stakeholders. What do you prioritize? Is it worth the risks? Is it worth the trade-offs? Is it okay to have these problems, or these unintended consequences, if it brings so many other kinds of benefits? Is it a question of time - is it just that the tech will improve, and over a period of time it will get better? That seems to be the narrative we see in policy circles and even within industry: it will get better, we will have more data, and then it will be okay. But engaging with the question of data being political and representational - recognizing that more data does not equal better AI, and that you will not fix the problems with AI just by expanding your data pool - I do not see that conversation happening within policy circles, though I am not sure it is happening elsewhere, like the EU, either.

Question: Do you think that in Global South contexts such as India, given limited state capacity and a lack of digital literacy and understanding of these topics, AI should only be implemented in certain areas like manufacturing, and should not be brought into other aspects of social life where it could have a much more dramatic impact and worsen inequality and social injustice - or at least not before the AI is tested?

Urvashi Aneja: I agree. I think there are technological choices to be made about where and how we want to use these technologies, and we should not be using them uncritically or at a large scale. There is a lot of scope for sandboxing and specific use cases. Now, with COVID-19, manufacturing and services industries are also changing, and that is having an impact too. There might be potential to use AI in certain kinds of places, such as dangerous or demeaning jobs like manual scavenging, which is still an accepted practice in India. Many hundreds of people die from manual scavenging every year. So that is where I would like to see a robotic intervention, a smart manual scavenger. A lot of industrial plants and processes are harmful to people because of toxicity, chemicals and so on. These could be select places where AI can be used.

Another thing which is happening, but not often talked about, is the business case for using AI for social sector gains in development, healthcare and education. We see a lot of pilots around healthcare, education and urban agriculture, but it is not clear how those pilots go to market. If you are talking about using AI to reach the masses, then in those situations there is no ready business model. For example, the rationale for using AI in healthcare is that it could improve access to quality healthcare and address India’s poor doctor-patient ratio. But if you look at where those AI solutions are being adopted, it is only private hospitals in large cities that are able to afford them. In a rural setting, where you do not have infrastructure and you have a lack of resources, there is not really a ready business case for the adoption of these solutions. Then either you need huge investments by the state to take those solutions to scale, or it is going to be Big Tech. If it is huge investments by the state, then the question is: why are you not making those investments anyway - in teachers, in schools, in infrastructure? And if it is Big Tech that is going to make those investments, then you have a whole other set of issues around democratic accountability, monopolies and sovereign control.

So my answer to your question is: yes, I do think that should be the case. It could also be that business interests push things that way, and that some of the hype around using AI for social sector interventions does not quite take off. But the worry for a developing country like India is: then what is happening to all that data? Given the number of pilots being run around India for healthcare and education, if there is no afterlife to them, and it is not obvious how they turn into products that can reach people at scale, then what is happening to that data? What is the purpose of the intervention? How else is it being used? That is also where some of the concerns around data sovereignty and digital colonialism arise. What is the purpose of these interventions?

Question: What do you think are likely future developments for AI governance and ethics in India?

Urvashi Aneja: Healthcare is the space to watch in the coming years; we are already seeing that India has a plan for a health stack and now a national digital health ID. With that, we might see more sector-based guidelines emerge. Equally, I think we will possibly see a lot more in the education space. Another one to watch is how some of these conversations around data and around competition play out in the coming months and years.

The question around non-personal data is going to be an important one. It will be interesting to see to what extent India invests in building that cloud infrastructure, that data sovereignty infrastructure and an independent homegrown AI ecosystem, and what the role of large businesses like Reliance Jio will be in that. But there is a long way to go. It is a very early conversation in the Indian context, and I think right now it is mostly just a lot of enthusiasm and a lot of hype. There is a long journey ahead in thinking about many of these issues. How it plays out is a political question; it is not just a question of the tech itself and how it will work. Even the way that we govern AI will certainly be a political conversation.

Rule of Law Programme Asia, Konrad Adenauer Stiftung and School of Law, Strathclyde: Many thanks, Dr Aneja.

The interview was conducted by Dr Angela Daly (Senior Lecturer of Law, University of Strathclyde) and Ms Aishwarya Natarajan (Research Associate, Rule of Law Programme Asia, KAS). We welcome your thoughts, suggestions and feedback. Dr Daly can be contacted at a.daly@strath.ac.uk and Ms Natarajan can be contacted at aishwarya.natarajan@kas.de.
