INKOVEMA Podcast "Gut durch die Zeit" (Well through Time)

#217 GddZ

Ethical aspects of artificial intelligence and counselling

Why we don't have to thank ChatGPT – or Siri!

In conversation with Cornelia Diethelm

Cornelia Diethelm is a Swiss digital ethics expert, entrepreneur and board member. She is actively shaping digital change at the interface between business and society. As head of the CAS Digital Ethics programme at the Zurich School of Business (HWZ) and lecturer in numerous degree programmes, she passes on her knowledge.

Diethelm studied political science, business administration and economics, later completing an MAS in Digital Business at the HWZ and further training in technoethics. She has many years of experience in the private sector, the state and non-governmental organisations. In 2018, she founded her own company, Shifting Society AG, which includes the Centre for Digital Responsibility (CDR).

In addition to her work as an entrepreneur, Diethelm is a member of several boards of directors, including Ethos, Metron and Sparkasse Schwyz. She is also co-owner of the LegalTech company Datenschutzpartner.

Cornelia Diethelm was honoured as a "Top Voice" by LinkedIn and voted one of the top 100 women in Switzerland by "Women in Business" magazine.

Gut durch die Zeit.

The podcast about mediation, conflict coaching and organisational consulting.

Contents

Chapters:

0:03 – Introduction to ethics and AI
3:51 – The importance of ethical reflection
8:25 – Challenges and risks of social media
16:16 – Dealing with data protection and ethics
24:21 – The role of humanity in AI
31:14 – AI agents in business
35:40 – Autonomy and self-determination in technology
39:17 – Decision-making work in dealing with AI

Summary of content

In this episode of the "Gut durch die Zeit" podcast, we look at one of the most pressing issues of our time: ethics in artificial intelligence (AI). I speak to expert Cornelia Diethelm from the Zurich School of Business (HWZ), an experienced business economist, political scientist and economist who brings a wealth of perspectives to this complex field. We discuss how the rapid development of new technologies not only opens up numerous possibilities, but also raises ethical questions that we must not ignore.

Cornelia emphasises that technologies are never neutral or objective. They are the product of human decisions and cannot be evaluated without taking our values and ethics into account. Our discussion begins with the most fundamental questions: How does AI work and what ethical considerations do we need to make when integrating it into coaching and mediation experiences? We explain that a basic understanding of how AI works is crucial to identifying the right applications for our needs and values.

We scrutinise how our perception of technologies such as the internet or smartphones influences our view of current AI. Cornelia emphasises that we often only recognise the negative consequences once a technology is already fully integrated into our lives. This reflection leads us to the challenges of digital change and the potential risks that AI brings to settings in the counselling field. We need to actively ask ourselves where these technologies can enrich our practice and where they may jeopardise values or reinforce stereotypes.

A central theme of our conversation is the responsibility we bear when we use AI. Cornelia argues that although we can use AI as a support, we as professionals must remain accountable for the outcomes. We must also make a clear distinction between humans and machines when using AI to avoid misunderstandings about the nature of these tools. Her criticism of the humanisation of machines implies that we need to be aware of what we expect from AI. A blurred boundary between human and machine interactions can lead to a loss of control.

The discussion about self-determination and decision transparency is particularly important. Cornelia encourages counselling professionals to work actively with AI to better manage the complexity of their cases, but always within a framework that maintains a clear awareness of ethical standards. We reflect on how user behaviour changes when assistive tools are integrated into counselling practices and talk about the importance of setting ethical standards when introducing new technologies.

This episode is a call for critical reflection on the rapidly changing landscape of technology and its impact on our work in counselling, mediation and coaching. We advocate an active engagement with these issues to ensure that we recognise and address not only the innovation gains but also the ethical challenges of the digital society.

Complete transcription

[0:03]
Introduction to ethics and AI
[0:00]Autonomy, self-determination, that's a huge, huge topic. You always have to make decisions when you build these systems. So they're not neutral, they're not objective… Welcome to the podcast "Gut durch die Zeit", the podcast about mediation, conflict coaching and organisational consulting, a podcast from INKOVEMA. I am Sascha Weigel and I welcome you to a new episode. Today we're talking about artificial intelligence, i.e. new technologies, and the ethical challenges they present us with.
[0:34]This is because ethical aspects can lead to people not engaging with new technologies in the first place or rejecting them outright. And in order to approach this topic properly, I have invited an expert who has been working on this subject for years: Ms Cornelia Diethelm from the Zurich School of Business (HWZ), a business economist, political scientist and economist by training. She has been working on this topic for years, both in the private sector and in governmental and non-governmental organisations.
[1:20]In 2018, she founded her own company, Shifting Society AG, which above all includes the Centre for Digital Responsibility platform. And it was through this platform that I got to know her at a training event and thought that Ms Diethelm absolutely had to speak here in the podcast on the subject of ethics and AI. Welcome, Ms Diethelm. Yes, thank you very much for the invitation. I'm very pleased to be here. Perhaps a word on the educational background: after completing my own degree there, I took on responsibility for a degree programme on digital ethics and now also bring these aspects into various other degree programmes, so to speak. So of course it's great to be able to pass on knowledge to the next generation at a school that you know as a student. I used to work in larger companies, but I also worked briefly for NGOs and in public administration, and I have now been my own boss for around seven years, a one-woman business so to speak, and I love it.
[2:22]And that's also how I got to know you. I took part in your online training series on ethical aspects of artificial intelligence. And that was also the starting point for me to say that I need to ask Ms Diethelm questions here in the podcast, to make this more concrete for the consulting world, especially for individual practitioners and small firms, i.e. coaches, mediators, small consulting firms, also in organisational consulting, etc. Because the topic of artificial intelligence seemed very promising to me and it made a lot of sense to deal with it, but the stumbling blocks are also enormous, so that even individual consultancies or small networks of cooperation partners can get into a real mess. And that was the point where I said I had to ask you. Fortunately, you accepted immediately.
[3:25]I think that's an important point, this counselling, because a lot of people are looking for guidance. And it's usually the case that an outside perspective perhaps brings in fresher ideas or more independent points of view. And that's why I'm also convinced that the entire consulting sector will be confronted with this in the field of AI and can also create benefits.
[3:51]
The importance of ethical reflection
[3:48]What is a good starting point for approaching the topic? Especially when I, as a counsellor, also seek advice in this case and say, okay, what significance does this AI have for me and, above all, with a view to ethical components and ethical issues, which generally also play a role in coaching or mediation and conflict counselling.
[4:15]One of the most important things is certainly an interest in understanding the basics. So how does artificial intelligence work? And you don't have to be interested in technology to understand that. You can deal with this topic once and then you have a foundation. And I think the most important thing to say as a starting point is that it's not about the technology, it's still about the people, it's about the content, what do I want to do? And then you simply have to draw on this knowledge to know whether AI could make a contribution. And in order to know whether it can make a contribution, you also have to look at the opportunities and risks. And these are not of a technical nature, but have to do with our values.
[5:00]A classic example is the false idea that technologies are neutral or objective. Artificial intelligence is made by humans from A to Z. It relies on data and can therefore never be objective or neutral. And you can also say, yes, humans also have their shortcomings. Yes, but we can't replace a deficient human with a deficient machine. So I think we need to be a bit more ambitious. And can we draw on experience with other technologies? Reflect on where technology has had an impact and where people have been faced with very specific ethical questions. Perhaps we can now draw on this experience. Or is AI such a different kind of technology that comparisons with other technological developments risk oversimplifying?
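A minimal sketch with invented data makes the point about non-neutral systems concrete: a toy "model" that learns from skewed historical examples can only reproduce that skew, no matter how correctly it is built.

```python
# Toy sketch: a "model" trained on skewed data reproduces the skew.
# The data is invented for illustration.
from collections import Counter

historical_examples = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

counts: dict[str, Counter] = {}
for job, pronoun in historical_examples:
    counts.setdefault(job, Counter())[pronoun] += 1

def predict_pronoun(job: str) -> str:
    # The "model" returns the majority pronoun from its training data.
    # It cannot be more neutral than the data it was given.
    return counts[job].most_common(1)[0][0]

print(predict_pronoun("nurse"))     # -> "she": a learned stereotype
print(predict_pronoun("engineer"))  # -> "he":  a learned stereotype
```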
[6:04]Basically, I would say that we can build on the experiences of the past, even if we don't know exactly where it will lead. But I think we very often no longer realise how revolutionary the internet was, for example. The internet came about 20 years ago. And of course that brought massive changes in how we can meet people or where we get our information from. The world suddenly became bigger. We have become networked. We also looked into areas that had a different language, a different culture. So that was very revolutionary. Or the mobile phone, the smartphone. When I was a teenager, smartphones didn't exist yet. So sometimes we only see this revolution in hindsight. And the internet and the smartphone also show very nicely how much technology has already changed our lives. And just as it was with those, it will also be the case with artificial intelligence. And were similar questions discussed back then? When I come to talk about the ethical aspects of this technology, which is supposed to be a core topic today, it doesn't immediately occur to me to link the smartphone or the internet as such with the idea that they could be unethical. Rather, if I put it one-sidedly, the mood back then was:
[7:26]Now knowledge is being democratised and we are all becoming broadcasters and not just passive television viewers. Maybe that's because we already have a certain distance. You noted that we only saw the positive aspects, but not everyone has access to the internet. So we already have a certain digital divide. Or think of people who don't speak English and therefore don't have the same access to information or training opportunities. Or perhaps a classic issue with smartphones is really these manipulative practices on social media, for example, or this constant scrolling, so that we can't get away from it, can't calm down, or these constant comparisons on social media. So that can also lead to psychological stress.
[8:25]
Challenges and risks of social media
[8:21]I think social media is a good example of where these ethical questions arise. Or even with social media, for example, the whole polarisation that we have.
[8:31]But this can also be the case, for example, when we think of the internet, of digitalisation, of prices. Sometimes we don't know what price we are being shown and why we are being shown it. Even with recommendations, we don't really know how they come about. I think there are a lot of points that we are simply not so aware of at the moment, because artificial intelligence is of course very, very dominant right now. Yes, those would primarily be aspects that arise through use, and through the intensive use by many, by the masses. With AI, it seems to me, though I can only imagine this to a limited extent, that the use as such immediately raises ethical questions. Especially in the field of, let's say, human counselling and the very humanistically influenced counselling in coaching and mediation, people tend to shy away from it, or the answer is rather no, it's not necessary, it won't make anything better, rather than greeting this technology with hope and longing for what it could make possible. Whereas with the internet, as I said, my memory may be one-sided:
[9:51]Back then the mood seemed to me one-sidedly affirmative. The question was not asked whether it is ethical to use it; rather, it was about getting everyone to use it as quickly as possible, because it is a disadvantage not to be connected to the internet.
[10:10]In addition to this distance, perhaps this difference is also due to the fact that when we talk about AI today, we very often mean generative AI. In other words, we are becoming producers. And then we are perhaps more affected by it than we might be with social media, which happens passively. Or with recommendation algorithms or prices. It's true that we are very often affected by the data or this predictive AI, but we don't notice it directly, whereas this producing is simply much closer to us. But then I really do notice the euphoric voices on the one hand.
[10:50]So I'm also the person who is very critical on many occasions. And then there are others who are incredibly anxious and only see the negative. And then I have the role of emphasising the positive aspects. So all in all, I'm convinced that everything has its positive and negative sides. And that's why we need to be smart, that's why this AI expertise is so important, so that we can confidently decide where it can deliver added value and where we certainly don't want to spread stereotypes because the data is simply outdated. Where do we still have to take responsibility for the result? Nobody is taking that away from us. You can't say: well, that may be wrong, but ChatGPT wrote it, not me. That's not possible. Another big ethical issue is, of course, humanisation. Who among us says thank you and please when we work with ChatGPT? It's humanly understandable that we immediately project something onto the machine. But that can also be problematic, because we then simply treat the machine as if it were a subject, a kind of human being. And that's not the case, and we should perhaps make a distinction here.
[12:11]So it's better not to say thank you? Yes, for a while there was also this tip that it delivers better results. But from an ethical point of view, you really shouldn't do that because it simply creates additional familiarity. And we don't know exactly what the effect will be if we communicate more and more with machines that are very similar to humans. In other words, we should simply not reinforce this transfer of human norms to machines. We are emotional beings, we develop empathy, especially when something comes across as empathetic. But we simply shouldn't reinforce this, instead we should use our common sense to draw a line.
[12:56]So precisely because we are human, not saying thank you to a machine is more appropriate than saying, on a naive level: but I'm just being polite, and if an answer comes my way, then I simply have to say thank you. You would rather say: that's naive, it's a machine, and you don't say thank you to a machine. Exactly. Sure, that's a convention of its own, but you can say: I don't have to be friendly to a machine the way I am to a person. I can simply treat a machine correctly, but it's a tool. I should also treat machines with care. I shouldn't just break them or replace them after a short time. But I shouldn't confuse this interaction, so to speak, with a human interaction. I find this interesting because I heard a colleague say, in a different context, that you achieve the best results in conversations with ChatGPT, i.e. also spoken, on a smartphone so to speak, if you imagine it is a person to whom you are explaining things, just as you would explain them to a colleague. And it makes sense for the quality of the output to imagine a person, so to speak.
[14:18]Which has a double meaning in it, precisely because it's a machine, to have to do that. I find that interesting, especially with respect to saying thank you, or rather not. Exactly, perhaps there are two aspects. One is actually the technical aspect, because it is based on probabilities and needs as much context as possible in order to work well. It is quite possible that if you convey urgency or particular politeness, the result could be better. But from an ethical point of view, we always make a distinction between a human being and a machine. And if we treat a machine like a human being from an ethical perspective, then we overestimate the machine. Then we also place too much trust in the result. It can also be the case that people suddenly have the feeling that this machine could have a consciousness. And it also makes us vulnerable. And that's why we might have to use our minds to ensure that we use this machine correctly.
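The technical aspect mentioned here, that a probability-based model tends to deliver better results the more context it is given, can be sketched in a few lines. A minimal sketch, assuming the OpenAI Python client; the model name and prompts are placeholders, and any comparable chat API would do:

```python
# Minimal sketch, assuming the OpenAI Python client ("pip install openai").
# Model name and prompts are placeholders; any chat-completion API would do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Same request, once without and once with context.
bare = [{"role": "user", "content": "Draft an opening statement."}]

contextual = [
    {"role": "system",
     "content": "You assist a mediator. Be neutral and non-judgemental."},
    {"role": "user",
     "content": "Draft a 3-sentence opening statement for a workplace "
                "mediation between two colleagues who dispute task-sharing."},
]

for messages in (bare, contextual):
    reply = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=messages)
    print(reply.choices[0].message.content, "\n---")
```

The second answer is usually far more usable, not because the machine was treated politely, but because it had more context to condition its probabilities on.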
[15:18]Just as we treat the environment with consideration, but we simply don't put it on the same level as a human relationship. Let's take a very practical approach. As a user who is concerned with data protection and acts in accordance with data protection law, who takes care to be technically well set up, how do you best approach the topic of ethics in the use of AI? Is it something that is formulated during use, where we realise, yes, we need to go in that direction, and then go back again, develop standards and realise, yes, the thank-you thing is perhaps not appropriate after all, and adjust, so to speak? To develop an ethical standard for use and during use? Or does it make more sense to say: I have my ethics and I won't let
[16:16]
Dealing with data protection and ethics
[16:13]the AI undermine them, and I'd rather not use it at all, or something like that? So how do you approach the topic?
[16:20]Yes, it probably depends on the individual person. There are people who have a very clear idea, for example in the area of privacy. This then guides their actions. How much data do I disclose and what do I do with these tools? And there may be others who are more interested in trying things out and then reach certain limits and suddenly realise, is this even right? And then quickly google it or ask someone. I think both options are possible, but in principle I would also argue in favour of experience. And that people who don't have much to do with the word ethics should perhaps simply think about their own values. So what is important to me? Privacy is important to me, but it's not equally important to everyone.
[17:02]Or it is important to me that I have a modern understanding when I generate a visual. So it's important that I have enough diversity or that I don't have any outdated stereotypes. In other words, it's really about critical reflection. Am I satisfied with what comes out? Am I satisfied that I am choosing provider A or B? This also has to do with preferences. For example, do I want a European tool, or do I want an organisation where I simply have the feeling that the decision-makers express themselves relatively responsibly? That matters, because certain products are quite comparable. So you always approach things by thinking about what I really want and what is not good for me or what I don't want to support.
[17:55]There are so many aspects in there that I don't want to pick out one too quickly, because I don't want to leave the others behind. It definitely seems to me that you need to make your own personal selection in order to approach the topic. So if I look at the counselling situation that I have in mind now, because many listeners also deal directly with people in conflict counselling, mediation or coaching and see the direct conversation with the other person as their problem-solving foil, then the question of how I use new technologies, or whether I am even betraying my values to a certain extent, is one thing. How do I use it? But I also assume that in future, people who come to us for advice will, similar to what we know from Google, have already informed themselves with new technologies or reflected on things beforehand. What does this mean for providers of such, shall I say, personal services, that in the future people will presumably come to us for advice already AI-infected, so to speak, AI-contaminated, AI-experienced?
[19:14]Hey, you who are listening to this podcast: if you like it, why not leave five stars and some feedback, so that others who haven't listened to or found the podcast yet can do so. And now we continue with the episode in the podcast Gut durch die Zeit.
[19:34]Of course, the doctor comes to mind, doesn't it? That will probably be a very wide range. Some will be really misinformed and it's difficult to correct that. That's why we need this AI expertise to know that this result may be wrong. And that will be even more important than we perhaps know from Google or internet searches in general. Then there are people who are better informed and demand more, are perhaps almost on an equal footing. And then there are certainly also people who help themselves. There are also people who no longer need a specialist for certain things. I'm always cautious about this, but there may be aspects that are not so important to the individual or that they don't consider to be so complex and where they can get relatively far with tools like these. That's what I think. That could be a relief. So in various professions, it can mean that you actually have more difficult cases and fewer simple cases. But it could also be that damage is done because you help yourself and then perhaps need even more counselling than before.
[20:45]So from that point of view, I think it's still difficult to estimate what effect it will have. And, of course, it could perhaps also be complementary. It's a big issue in psychotherapy, because ChatGPT comes across as so empathic that certain people feel they can work through things with it. But experts also say that they think it will be more complementary: for reflection, for certain tasks, or even before you have to go to an expert, so that you have some inspiration. How am I doing today, writing this down? These are perhaps even methods that people used to simply do…
electronically or by hand, and that you can now cover with an app. And instead of writing, you can perhaps do it with your voice.
[21:36]Even with your own voice. Exactly. Sometimes, perhaps after a certain amount of time, you realise that it's not such a crazy revolution. That it's a big change in the medium to long term, but that it's a change and not a revolution that happens now, after which everything is different. Yes, images and scenarios immediately come to my mind where I can see this embedding in a completely new way, especially in coaching and therapy processes: embedding it in the therapeutic process, so that it is not replaced but supplemented with, let's say, a better diary, like the one you used to keep as a therapy patient. Just write it down. Don't censor it much. It doesn't have to be a novel. It's just for you. Just write it down. And that was very helpful. It was also used to some extent in the therapeutic process. And it could be used here, if I imagine I have an app as a patient, or I give you an app as a therapist and say you can log into it. You can talk to an AI there. It may even have my voice, or not, and I have limited access or I get a summary of it. We will then continue working with that in the next session. In other words, embedding it is quite conceivable in a completely evolutionary development of therapy.
[23:03]Exactly, and that you don't have to write it, for example, but can document it with your voice, so to speak. Or that you can also document it in dialogue. Or that the doctor also has access to it, so that they can look at it if they want to. So it may even lead to better quality, but it probably doesn't solve the fundamental problem. Instead, it can really just improve the therapy as a whole, make it more individualised, perhaps make it more enjoyable.
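The workflow sketched in this exchange, journal entries that the client controls and an AI-generated summary for the next session, could look roughly like this. A minimal sketch; all names are invented and summarise() stands in for any summarisation model:

```python
# Hypothetical sketch of the journal-plus-summary workflow described above.
# All names are invented; summarise() is a stub for any summarisation model.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class JournalEntry:
    day: date
    text: str                           # typed, or transcribed from voice
    share_with_counsellor: bool = False # the client decides per entry

@dataclass
class Journal:
    entries: list[JournalEntry] = field(default_factory=list)

    def summary_for_session(self) -> str:
        # Only entries the client has explicitly released are summarised.
        shared = [e.text for e in self.entries if e.share_with_counsellor]
        if not shared:
            return "No entries released for this session."
        return summarise(shared)

def summarise(texts: list[str]) -> str:
    # Placeholder: in practice an AI model would condense the entries here.
    return f"{len(texts)} entries released; themes to discuss in session."

journal = Journal([JournalEntry(date(2025, 1, 6),
                                "Slept badly, dreaded the team call.", True)])
print(journal.summary_for_session())
```

The design choice that matters ethically is the per-entry consent flag: the counsellor sees a summary only of what the client has released, which keeps self-determination with the human.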
[23:32]I always call on experts from all professions to get involved, because, of course, there are always lay people who have no idea about the actual problems and then try to use technology to solve something that isn't needed. So if, in a certain situation, care is very important, human care, really understanding the uniqueness of the individual person, or if physicality is important, then this AI is not the right tool. And experts know best what the right tools are. So where can humans be supplemented by AI, and where is the human really necessary?
[24:21]
The role of humanity in AI
[24:16]It is fundamentally important that people continue to fulfil this function. We don't need to rationalise everything that has to do with relationship work. We need to rationalise what can be standardised or what is boring; that's where we need to start. In other words, my focus should not be on the question, fuelled as it is by anxiety, of when the time will come when I am replaced by an AI because it can do it better.
[24:46]Instead, my focus as a user should be on where AI can support me and, so to speak, expand or focus my skills and make my work easier. That's exactly what I would say. And it's perhaps the same with the one or other person who says: I don't need to go to a specialist; though perhaps they wouldn't have made use of one before anyway. So I think there are still very few people for whom this works. But I am also convinced that in many professions it will really be the specialist who says where it can simplify or improve my work, where it is an additional tool. And it may also be that it is not an enrichment for everyone, but for some it is a huge opportunity that they can cover things in a different way. And for certain people it's not the right thing.
[25:39]Perhaps the complexity will simply increase, and very often it will, because more is possible. More variants are available. And as experts, we must always ensure that this also makes sense from a professional perspective. And that's why I sometimes see the danger of someone saying, from a purely technological perspective, that we need to change something now. One example of this is nursing, or teachers. And that's usually in the wrong place. So a robot doesn't have to look after the patients, because anyone in nursing would like to spend enough time with the patients. And the patients also want this contact. So it's perhaps more in other areas of care where such systems can provide relief, for example with the bureaucracy.
[26:28]But then it might even be an idea to reduce the bureaucracy a little, wouldn't it? Sometimes AI is not the solution. I was in hospital recently when my son broke his foot, and I spent what felt like five hours in the waiting room. And I think that could be organised differently, so that patients arrive where the doctor actually sees them in a more timely and organised manner. Then things are fine again. So with this organisation and the logistics behind it, I think technology could definitely provide further support, even if that's not a concrete proposal for hospital management. I think it's a great example, because I believe that what creates the greatest benefit is the unspectacular. It's not the humanoid robots or the four-legged robots that do my housework; it's the unspectacular things that improve matters. And it's the professionals, we with our expertise, who are best placed to judge that, because we know where we might be underchallenged, where we could invest our time better. And that's where these systems can really provide us with added value, or even a certain degree of personalisation that has simply not been possible up to now, but which would be desirable if it were possible.
I actually find it quite spectacular myself, even if formally it's only a small improvement. But you also mentioned this earlier, this difference from written language, i.e. from typing to speaking. ChatGPT emerged, and at some point I discovered that I could use it directly on my iPhone.
[28:06]And I found it revolutionary that I could speak things in. And yet it was an unspectacular step, because when I write, I use the same language and type it in there. I write relatively often and quickly and a lot, so I don't notice it that much. But speaking was something completely different for me, in terms of quantity and then also in terms of quality. How do you observe the use of AI tools now that more and more voice input is possible and the format, the medium, has become more permeable between written language, images and sound? Is this something that represents the next generation or stage of technological development? Yes, I think so. The whole interaction with the machine: it used to be that we simply typed something in, and now it's actually the case that we speak, and that we generally give commands. We constantly give instructions, and the machine then does quite a lot with these instructions. Before, we had to type in virtually every detail, or the machine didn't do it itself. That's a big change, I think, and it also corresponds to our form of communication. It's much more spontaneous, faster. Sometimes it can certainly lead to us not being precise enough. But of course it's also easier, because the other way is a lot of hard work.
[29:31]And we've learnt that, and it's incredibly exhausting. And it's also a wonderful relief for a lot of people who struggle with capitalisation, lower case, commas and so on. Many people have suffered from this. That's why I think this communication with the machine via our language and in dialogue is quite a revolutionary new stage. And simply always giving these instructions; so now also with these AI assistants that are coming, we give many more instructions and they are then carried out, just like this research, for example, this deep research, which already exists, right? And yes, we simply have to learn to formulate the best possible tasks and to formulate them relatively clearly and comprehensively. And then, of course, there is also more or less expertise that you acquire.
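A small illustration of what "formulating a task clearly and comprehensively" can mean in practice; the structure and wording below are only one possible pattern, invented for illustration:

```python
# Illustrative only: one way to phrase a clear, comprehensive task for an
# AI assistant. The fields and wording are the editor's, not a standard.
TASK = """
Role: You support a conflict coach preparing a first session.
Context: Two team members disagree about how to split on-call duties.
Task: List 5 neutral, open questions the coach could ask each party.
Constraints: No advice, no judgement of either party, max. 15 words each.
Output: A numbered list.
"""
print(TASK)
```

Role, context, task, constraints and output format are the kind of detail that a dialogue-driven machine needs spelled out, exactly the "decision-making work" the conversation returns to later.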
I'll pick up on this point because it's really topical and very practical. Everyone is talking about these AI agents, and anyone who comes into contact with the topic even a little bit hears that they are coming, that it is now possible. And now let's assume that the listeners say: okay, I'll get on the train now; I don't have to jump on straight away, but I can do it normally, with all due patience. How could I approach this topic, AI agents, me and my business, what can I do? How can I find my way into it step by step, so that I can work as ethically as possible, without
[31:14]
AI agents in business
[31:09]accidents, or come into contact with the topic? One thing is when I see articles in my media consumption that I read. At the same time, there are regularly very good videos on YouTube, including current talks and lectures, that explain this a bit. And over time, as we know from ChatGPT, tools will really come onto the market. People who are looking into it now, I think, are more likely to be interested enough to build something themselves. So they say: I have a need, I have a lot of interfaces in my way of working that are not yet well solved, and then they really build themselves an agent that sorts their mail, for example, or enters appointments in the calendar and so on. So it really is a certain degree of automation. But I think those of us who perhaps don't go to that level
[32:05]will wait until a really simple tool comes onto the market. And then it might not be able to do that much. But then we might realise that booking appointments or sorting emails is actually a benefit for me; other things I might not need. Of course, a lot of things are still unclear at the moment. What can an assistance system like this do? And from an ethical perspective, this also raises the question of self-determination. How can I be sure that this agent will do what I really want it to do? Because providers, too, sometimes consciously or unconsciously have an agenda. These systems are always shaped by values or commercial interests. And that's why, of course, it makes a difference:
[32:52]booking a table is different from an assistant booking a holiday for me directly, isn't it? So I think there are also safety measures here, where you can say what is safe to say yes to. And with other things, you don't want that, or it's really rather problematic. Because we will certainly still see many mistakes and will read about them. But where this makes sense, and which people want to hand over which things to an assistant, I think that's still very much in its infancy at the moment. For one thing, my own provider has suddenly given me a mail-sorting agent, and I've been able to sort things out a bit and adapt the preset rules
[33:46]to my own needs. I found that very satisfying. And I also realised that I was exposed to agents at Deutsche Bahn, for example. Based on my use of this external application, i.e. the DB Navigator, I was given recommendations or offered the next decisions in an automated, agent-like way, so to speak. And when it became clear that I had missed my train, I was offered alternatives without having to look for them; it had simply become clear that I needed them. So that also seems to me to be a point of contact: we work together with agents, come into contact with them, without perhaps immediately realising it. Exactly. One thing is that sometimes you don't even know whether it's just an algorithm, an AI model, or really an agent. Sometimes it's not entirely clear. But I also think it's getting to the next level, so maybe we won't be shown a selection based on our data, but we'll actually just be told: do you want this next? That's a danger, isn't it?
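The safety measures mentioned above, deciding in advance what an agent may say yes to on its own, can be sketched very simply. A minimal sketch with invented action names: harmless actions run automatically, consequential ones require explicit consent:

```python
# Hedged sketch: an assistant that only auto-executes pre-approved actions
# and asks the human before anything consequential. All names are invented.
SAFE_ACTIONS = {"sort_mail", "suggest_timeslot"}                # run freely
CONFIRM_ACTIONS = {"book_table", "book_holiday", "send_reply"}  # ask first

def run_action(action: str, details: str) -> str:
    if action in SAFE_ACTIONS:
        return f"done automatically: {action} ({details})"
    if action in CONFIRM_ACTIONS:
        answer = input(f"Agent wants to {action}: {details}. Allow? [y/N] ")
        if answer.strip().lower() == "y":
            return f"done with consent: {action}"
        return f"declined: {action}"
    # Anything not explicitly approved is refused by default.
    return f"refused: {action} is not on any approved list"

print(run_action("sort_mail", "move newsletters to 'Read later'"))
print(run_action("book_holiday", "2 weeks in Italy, 3,400 CHF"))
```

The point of the two lists is exactly the distinction made in the conversation: booking a table and booking a whole holiday should not sit in the same category, and the default for everything unknown is refusal.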
[35:00]We now also notice this in Google searches. In Google searches, we used to get different links and could decide what I want to click on and what not. If all of this is then only summarised, there is of course a risk of concentration. And that gives the providers incredible power. So we'll have less variety. Perhaps we also forget to think critically about what I want to click on, which sources are perhaps more trustworthy and which less so. And in the worst case, it's not support from these agents; we're actually being pushed, aren't we?
[35:40]
Autonomy and self-determination in technology
[35:41]We are simply being pushed in a certain direction. So I do believe that this autonomy, this self-determination, is a huge, huge issue. You always have to make decisions when you build these systems. So they're not neutral, they're not objective. And that has always been the case with recommendation algorithms. I didn't just get recommendations based on my data trail; I also got recommendations because the provider wanted to recommend certain books, and of those I simply got the ones that suited me best. It was a mix between my data trail and the provider's interests. The point you raise, that the ease of summaries and the absence of friction can tempt me to
[36:28]give them priority over the somewhat arduous option of choosing for myself: we've seen this very clearly with the YouTube algorithm, including the dangers. If we didn't decide anything, it became more and more radical, more and more attention-grabbing, but also really violent. And we were no longer making our own decisions. And that would also be taken away from us now if we search with Perplexity or ChatGPT. We are more likely to be offered something that is already sorted. That makes it easier. And the question is, so to speak: where do I put in my own decision-making effort again? That is very, very important.
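The "mix between my data trail and the provider's interests" can be shown with a toy calculation; the titles and weights below are invented. As the provider's weight grows, the ranking tilts away from personal fit:

```python
# Toy sketch of a recommendation score that blends personal fit with what
# the provider wants to push. All titles and numbers are invented.
books = {
    "Title A": {"fit_for_me": 0.9, "provider_push": 0.1},
    "Title B": {"fit_for_me": 0.4, "provider_push": 0.9},
    "Title C": {"fit_for_me": 0.7, "provider_push": 0.5},
}

def ranked(provider_weight: float) -> list[str]:
    # score = (1 - w) * personal fit + w * provider interest
    return sorted(books, key=lambda b: -(
        (1 - provider_weight) * books[b]["fit_for_me"]
        + provider_weight * books[b]["provider_push"]))

print(ranked(0.0))  # pure personal fit:    ['Title A', 'Title C', 'Title B']
print(ranked(0.8))  # provider-dominated:   ['Title B', 'Title C', 'Title A']
```

Nothing in the output reveals which weight was used; that is precisely why such systems are not neutral even when they feel personalised.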
We are not at the mercy of this. This is not a wave that simply rolls over us, after which we just have to work with AI agents.
[37:18]We can decide where we want to be relieved, with which tools, from which provider, and where not. I think it will also be a mix in the future. I think this is something that is very often neglected. People have the feeling that if I don't use it, then I'm one of those people who refuse to face the future. And that's not the case. These are all opportunities to make our lives better, to expand our room for manoeuvre. But only in certain areas of life.
[37:47]We don't want our whole life automated and to be pushed around. That doesn't make our life better. We have to decide which variants we want to use and which simply don't make us better off. And it also helps to know what I enjoy doing. So if I enjoy writing a letter by hand, why should I give that up? If I enjoy typing instead of speaking into a machine, then I should keep doing that. It will be a mix between analogue and digital, between things I decide myself and others I hand over to a system because I have had good experiences with that. I believe trustworthiness is another important point. If I have bad experiences, I will change provider. If I say, though, that this actually corresponds to how I would like to handle it, then I have no problem with it. But beyond that, I am really convinced that this is a change whose scale is still difficult to assess. People say that things are vastly overestimated in the short term but also underestimated in the long term. And I believe the internet and the smartphone have shown that incredibly well. We would never have thought that the smartphone would trigger so much, right? And it will be just the same with AI. It is not the humanoid robots; it is the unspectacular things that will prove very formative in the medium to long term.
[39:17]
Decision-making work in dealing with AI
[39:17]And then, and I find this a good point on which to conclude, one may well see the starting point for engaging with AI as making life easier in certain respects, but engaging with it, and selecting what is supposed to make things easier for me, is itself work, and that means decision-making work. I have to decide how I approach the topic and how I take each step. And at each step I have incredibly many options.
[39:51]That seems to me to be exactly the aspect that makes it so hard to get going. Yes, that can overwhelm many, many people. I am sometimes overwhelmed myself. I believe the important thing is also to take a bit of distance. So not to chase every piece of news that some model has got a bit better again, or that there's a new tool, and so on. Not to let that get too close to you, but to see the big picture and say: I want to know roughly what is going on. Ah, now these AI agents for organisations are coming; ah, text generators, I can write certain things with them. So I really do plead for observing all of this closely, but then also for gaining targeted experience in order to decide what is right for me. Because it has nothing to do with refusal if certain things deliver no added value for me. Ms Diethelm, many thanks for the revealing and informative conversation. It was great fun for me. Many thanks. Thank you very much. Until next time, and a good time to you. That was my conversation with Cornelia Diethelm from the Zurich School of Business (HWZ) on the topic of ethics and AI.
[41:05]If you enjoyed this episode and like the podcast in general, then leave feedback and a star rating in your podcast catcher, for example on Apple Podcasts or Google Podcasts. Recommend the podcast to others and subscribe if you haven't already done so. For now, I thank you for being with us again and say goodbye with best wishes. Until next time. Come well through time. I am Sascha Weigel, your host from INKOVEMA, the Institute for Conflict and Negotiation Management in Leipzig.