Responsible Application of Generative AI – Human-AI Collaboration

By Konrad Sowa

Guest post by Konrad Sowa, PhD Researcher and VC Associate at Kozminski University & SMOK Ventures.

Generative AI is everywhere by now, and it doesn't look like it's going away. Yet despite rapid technical advancements, its widespread adoption has been relatively moderate. This presents a brief window of opportunity to discuss the responsible, human-centric implementation of this revolutionary technology in the workplace.

This blog post explores the journey from human-computer interaction to collaboration. We'll discuss the benefits of augmentation over competition and envision the future of human-AI partnerships. By embracing responsible AI practices and nurturing collaborative relationships, we can unlock new possibilities for innovation and progress.

From human-computer interaction, via human-AI interaction, to human-AI collaboration

Human-Computer Interaction

Human-Computer Interaction (HCI) focuses on designing and evaluating user interfaces, enabling seamless interaction between humans and technology. Traditional HCI relies on graphical user interfaces (GUIs) and physical input devices.

Interfaces have evolved from the first computer mouse and keyboard in the 1960s to the widespread adoption of touch interfaces on smartphones in the late 2000s. Using clicks, buttons, sliders, and other visible elements of the GUI, people have been communicating their needs to electronic devices for more than 60 years.

 

Computer mouse, Douglas Engelbart 1968. Source.

Human-AI interaction

With generative AI, we are currently experiencing a completely new interface, with an uncharted, sometimes still uncanny, mode of interaction – our natural language. For now that means voice and written text; in the future it may include gestures or even facial expressions. In other words, we have made computers adapt to our way of communicating. Rather than clicking through ChatGPT and selecting the desired task, we explain in words what we mean and what we want it to do. And it talks back to us!
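To make the shift concrete, here is a minimal sketch of what "explaining with words" looks like in code. It assumes the OpenAI Python SDK (openai>=1.0) and an API key in your environment; the model name is only an example, not a recommendation.

```python
# A minimal sketch of natural-language interaction with a generative model.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# Instead of clicking through menus, we simply describe the task in words.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; any chat-capable model works
    messages=[
        {"role": "user", "content": "Proofread this sentence: 'Their going to the the store.'"}
    ],
)

# ...and the model answers back in natural language.
print(response.choices[0].message.content)
```

The interface is the sentence itself – no buttons, sliders, or menus involved.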

When was the last time a computer program didn't work and you angrily clicked around, shoved your mouse, and muttered under your breath how much you hate this computer? Well, now, when you write to a generative AI that you hate its work, it will apologize to you for its poor performance. That's new.

Screenshot from ChatGPT

In human-AI interaction, designers and developers should apply the same good old principles derived from decades of human-computer interaction research, plus a few new ones. Human-AI collaboration, however, is a completely new game.

Human-AI collaboration

Human-AI collaboration (HAIC) is a newly emerging field, and with hundreds of papers published quarterly, it is developing fast. It assumes that people will not only interact with AI of all kinds to get what they want, but that this interaction will soon turn into collaboration: humans do part of a task, AI does another part, and the interaction feels collaborative to the humans.

One may say that, by definition, collaboration happens between people. Another could say that if interaction with AI feels collaborative to the person interacting, we might as well call it that. A clear distinction proposed by some researchers is that HAIC happens when AI is (1) proactive and (2) independent.

Proactiveness means that the AI makes suggestions or acts of its own accord, rather than being triggered by a prompt. Independence means performing its job autonomously, e.g. while the human does something else in the meantime. Think of J.A.R.V.I.S. from Iron Man or K.I.T.T. from Knight Rider as great examples of proactive and independent AIs. By the way, most generative AI (such as ChatGPT) is not there yet.
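The toy sketch below contrasts the two modes. It is an illustration only, and every name in it (check_calendar, notify, and so on) is hypothetical, not a real product or API.

```python
# Toy sketch: prompt-driven AI vs. a proactive, independent assistant.
# All helper names (check_calendar, notify) are hypothetical placeholders.
import threading
import time


def reactive_assistant(prompt: str) -> str:
    """Today's typical generative AI: it only acts when a human prompts it."""
    return f"Response to: {prompt}"


def start_proactive_assistant(check_calendar, notify, interval_seconds: int = 60) -> None:
    """Proactive: it monitors context and speaks up on its own.
    Independent: it runs in the background while the human does something else."""
    def loop():
        while True:
            upcoming = check_calendar()  # looks at the calendar without being asked
            if upcoming:
                notify(f"Heads up: '{upcoming}' starts soon. Want me to draft the agenda?")
            time.sleep(interval_seconds)

    threading.Thread(target=loop, daemon=True).start()  # the human keeps working elsewhere
```

The first function waits to be asked; the second decides for itself when to act – that difference is what makes interaction start to feel like collaboration.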

Augmentation, not competition for knowledge work

Is AI going to take your job? Is AI going to replace 'X'? You can put any knowledge (white-collar) job in place of 'X', and the answer will always be – "probably yes, but only partially".

For starters, according to McKinsey (2022), “AI” (as a field of science/development/work) creates tons of jobs. Yes, most of them are in tech, but bear with me.

A great report published by the World Economic Forum in 2020 found that the split of work between people and machines (across both blue-collar and white-collar jobs) was 67% people to 33% machines, and that by 2025 it would shift to 53% people to 47% machines. The same report says that some job loss is inevitable (an estimated 85 million jobs by 2025), but with new jobs being created (an estimated 97 million by 2025), the net effect will be positive. I'm not sure whether the WEF analysts anticipated the widespread introduction of generative AI in 2022; assuming they didn't, this change may happen even faster. So the shift is clearly under way.


World Economic Forum, Future of Jobs Report 2020. Source.

Human-AI collaboration in a simple form (neither proactive nor independent) is already happening. Plenty of doctors use AI for better decision making (e.g. analysis of medical imaging). I used ChatGPT to proofread this article. Marketers use graphic generators for social media posts, and I'm sure plenty of you have also used it at work. Did it ever replace you at your job, or did it perhaps make your work easier, faster, better, more pleasant?

The responsible thing to do at the moment is to urge organizations to adopt generative AI in a responsible way. Responsible, in this sense, means always keeping the human at the center of development and deployment: creating workspaces, workflows, and jobs that use what AI does best (such as data analysis and structured problems) and what humans do best (such as empathy, creativity, and unstructured problems).

Benefits and challenges of human-AI collaboration

The benefits of HAIC are obvious – increased productivity and job satisfaction. Less time spent on tasks that people don't want to do leads to less frustration and more happiness. A 2020 study showed that human-AI collaboration increases productivity and job satisfaction on both subjective and objective measures. Another pressing issue is the enormous labor shortage in many industries, in both white-collar and blue-collar jobs (see this report by the US Chamber of Commerce, 2022). By raising the efficiency of each individual workplace, organizations can raise the productivity of the entire organization, thereby answering the problem of scarce talent on the job market. But again – this is achieved by augmenting, not replacing, people.

Key challenges of HAIC involve trust, hyper-personalization, and education. Firstly, AI needs to be built in a way that the people who collaborate with it can trust it. Trust is essential to any collaboration, whether between people or between people and machines. A human worker needs to be able to trust that their electronic counterpart will do its job well.

Secondly, a 2021 study showed that people have utterly different collaboration and communication styles, and they expect AI to adapt to them. In practice, this means that for some people AI should be less proactive, should not challenge their viewpoints or make suggestions, and should communicate in a specific register; for others, it should be the direct opposite. This poses a huge challenge for developing collaborative AI technologies that are meant to serve the masses.
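One way to picture this kind of per-user adaptation is a small preference profile that shapes the instructions given to the model. This is a hedged sketch under my own assumptions – the field names and wording are purely illustrative, not a standard.

```python
# Illustrative sketch of hyper-personalization: a per-user profile that tunes
# how a collaborative AI behaves. All field names here are assumptions.
from dataclasses import dataclass


@dataclass
class CollaborationProfile:
    proactive: bool        # should the AI volunteer suggestions?
    challenge_views: bool  # may it push back on the user's viewpoints?
    tone: str              # e.g. "formal" or "casual"


def build_system_prompt(profile: CollaborationProfile) -> str:
    """Turn one user's collaboration preferences into instructions for the model."""
    rules = [f"Use a {profile.tone} tone."]
    rules.append(
        "Offer unsolicited suggestions when they seem useful."
        if profile.proactive
        else "Only respond when directly asked; do not volunteer suggestions."
    )
    rules.append(
        "Respectfully challenge the user's assumptions."
        if profile.challenge_views
        else "Do not challenge the user's viewpoints."
    )
    return " ".join(rules)


# Two users, two very different assistants built on the same underlying model.
print(build_system_prompt(CollaborationProfile(proactive=True, challenge_views=True, tone="casual")))
print(build_system_prompt(CollaborationProfile(proactive=False, challenge_views=False, tone="formal")))
```

Scaling this from two users to millions, each with their own profile, is exactly where the challenge lies.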

Lastly, educating people needs to start now. Organizations need to inform their employees about how and where they are going to apply collaborative AI, and how to use it best. As noted earlier, this is new to people, and it is the employer's responsibility to inform and teach.

Human-AI collaboration, generated by DALL-E

 
