On 13 February 2019, Webber Wentzel and Microsoft hosted a forum entitled
Designing and Regulating AI Amidst Uncertainty. The forum brought together leading minds from business, government and the legal fraternity in the field of artificial intelligence (AI) to discuss the future of AI and the complexities of future laws and regulation.
The event was hosted by Warren Hero (CIO of Webber Wentzel) and facilitated by renowned technology editor and columnist Toby Shapshak. The panel included Aalia Manie (Webber Wentzel Partner); Theo Watson (Microsoft's commercial attorney for the Middle East and Africa); Timothy Wolff-Piggott (Senior Data Scientist at DataProphet); and Imran Patel, Deputy Director General of the Department of Science and Technology.
As an introduction to AI, Shapshak began by outlining how AI will impact all facets of business and society. Although AI already exists in our day-to-day lives, this technology will ultimately see processes that have traditionally been performed by humans being automated or undertaken by software and machines. The technology will become so universal that it will be offered to consumers as a service, much like cloud services today, explained Shapshak.
The forum heard how, because of its complexity and scope, AI serves as an umbrella term for a number of different technologies, including robotics and machine learning. As each of these individual technologies becomes ever more pervasive and complex, it will be imperative for lawyers and regulators to get to grips with the impact these technologies will have on business and society.
AI is certainly going to throw up many philosophical and ethical debates, including questions around the future of employment and the need for skills development and social grants. On the legal front, Manie explained that lawmakers will have to grapple with how AI fits into existing laws and how the law will need to evolve in response to technological change. A particular concern is how liability will be allocated in the event of defective AI, and how intellectual property should be owned and licensed given the chain of people involved - including technology owners, coders and software designers, and end users. In addition, she noted that drafting contracts around service delivery and liability will present a unique set of challenges. Data (an important input for training AI systems) was another issue Manie raised, asking businesses and AI developers to consider: whether they have all the rights needed to process and exploit the data lawfully; who should own the data and the results and outputs of the data once processed; and how the data will be owned, managed, protected and used in the future.
Patel then expressed government's views on the regulatory issues around AI and machine learning. He explained that although government can see the many benefits this new technology brings, it is still critical to ensure that it is used for the upliftment of society. He called for absolute transparency in terms of how the technology is created, designed and used. Patel then said that all of these issues are further complicated by a lack of global consensus on moral and ethical issues.
Amidst all these concerns, there was a call for pragmatism on issues around AI. Manie added that government should be balanced and proportionate in its approach to regulation to ensure that it does not discourage innovation and local investment in AI. Wolff-Piggott said that resistance to new technology often results in it being vilified, giving the example of self-driving cars: Tesla has shown that its self-driving technology is three times less likely to be involved in an accident than a human driver, yet the technology is still considered unsafe. He noted that people are very risk-sensitive when it comes to new innovation.
Taking all the inputs into consideration, there was consensus that lawmakers and regulators must stay up to date and in step with these rapid technological developments, and that there must be wide engagement and extensive, robust debate between stakeholders across society and business. Watson concluded: "I shudder to think what we are going to have to think about in the future to break these issues down. It is going to be very difficult." Clearly, creativity, philosophy and ethics are going to be vitally important human attributes when it comes to regulating AI in the future.
For more insights from the day, watch the video below.
For more information, contact one of our specialist advisers.