Will AI Prompt Us To Build A Sustainable Future?

Is AI a force for good or a double-edged sword? In this comprehensive look, we’ll discuss everything from the ethics of AI to its transformative role in sustainability. Are you ready to understand the complexities of an “AI Sustainable Future?” Let’s dive right in:

with 🎙️ Alice Schmidt, MBA lecturer, adviser to the European Commission and to non-profit organizations like Extinction Rebellion and Protect Our Winters, and Chair of the Board of Endeva e.V.

with 🎙️ Claudia Winkler, CEO and co-founder of Goood Mobile, Europe’s first B-Corp-certified telecom provider, and a Founding Partner of Adjacent Possible Network.

💧 Alice and Claudia first joined us for their book “The Sustainability Puzzle,” and are back for an in-depth exploration of the AI world and its potential impact on societal progress and a sustainable future, which is the topic of their second book: Fast Forward, written with Florian Schütz and Jeroen Dobbelaere.


Full Video:

https://youtu.be/gb2CcbLDyJI

What we covered:

📚 How Alice and Claudia’s “Sustainability Puzzle” hit the zeitgeist by guiding both sustainability newbies and experts through the multifaceted realm of eco-responsibility.

🔬 Why their new book, “Fast Forward,” homes in on technology, one of the six puzzle pieces from their first book, to spotlight its transformative potential for sustainability.

🤖 What Claudia sees as the agnostic nature of technology, emphasizing that it’s up to us humans to shape AI’s impact on global social and ecological issues.

🔍 Why Alice casts a skeptical eye on the good intentions behind big tech, pointing out that well-meaning sentiments don’t always translate to beneficial outcomes.

🎯 How both authors urge active participation in shaping AI’s direction, suggesting that collective input could steer technology towards a more equitable and sustainable future.

🤖 How Alice and Claudia disagree on the role of tools like ChatGPT, revealing a tension between AI as a utilitarian augmenter versus a potential risk to trust.

🌐 Why both emphasize AI literacy, advocating for the masses to engage with tools like ChatGPT in order to make well-informed decisions.

⚖️ How all four authors tackle the murky legal waters of AI co-authorship, touching on the limitations of attributing intellectual property to machines.

🚀 Why ChatGPT’s viral success is likened to the smartphone revolution, ushering in mass-market adoption of AI technologies and sparking important societal debates.

🎓 How Schools Aren’t Necessarily the AI Saviors: Alice and Claudia tackle the lag in institutional education when it comes to AI literacy, underlining that schools and universities often come late to the game.

🛠 What Lifelong Learning Means in the AI Age: Claudia and Alice assert that AI literacy is not just a school’s job but should be incorporated into ongoing professional training to make employees “future fit.”

🚨 How AI Could Be Its Own Worst Enemy: Alice points out that while AI can revolutionize education, it also poses risks, such as over-reliance on ‘intelligent tutoring systems,’ potentially dampening the human elements crucial for education.

💼 Why Big Tech Needs a Leash: Alice advocates for stronger regulation of large tech companies in the AI space, suggesting that their outsized influence demands scrutiny and governance beyond ‘light touch self-regulation.’

🤖 How AI’s Multifaceted Analysis Outsmarts Human Limitations: Why AI has the potential to analyze complex, multi-parameter problems in agriculture and beyond, delivering unbiased solutions.

💼 Why Regulations Don’t Suffocate Innovation, They Guide It: Alice and Claudia argue that regulation, such as Europe’s Corporate Sustainability Reporting Directive, actually propels companies toward sustainability and responsible actions.

🌍 A Global Playbook for AI? What the hurdles are when governments try to implement AI regulations, especially since it affects numerous sectors and stakeholders, making a one-size-fits-all rulebook a labyrinthine task.

🤑 Money Talks, Even for AI: What Alice thinks is the predominant “intention” in AI: Making money for its operators, thereby hinting at the need for purposeful regulation to guide its application for societal good.

🌱 Optimism in the AI-Ecology Nexus: Why Claudia sees a bright future for AI’s role in sustainability, citing its applications in climate tech startups, smart cities, and precision agriculture as reasons for hope.

🤖 How AI is personalizing Claudia’s life and revolutionizing smart cities, making them more livable and sustainable.

🌊 What Tuvalu’s digital twin reveals: a poignant, almost meta response to climate-induced existential threats that also sheds light on the term “ecological racism.”

🎯 Why stepping into the AI world varies by individual: It’s all about awareness, understanding the bigger picture, and leveraging a human-centered approach for universal benefit.

🗳️ How the democratization of AI and active participation in decision-making could be a marker for Alice Schmidt’s vision of positive societal impact.

🤖 Why Technology is Finally Joining the Sustainability Conversation: Claudia is optimistic about the growing convergence between sustainability and technology agendas, indicating a maturing landscape.

📖 Claudia’s 14-year-old son’s ChatGPT book, Parents being the Unsung Heroes in AI Education, Neom as a double-sided sword, Betting big on Circularity, Playing the Long Game with Conferences… and much more!

🔥 … and of course, we concluded with the 𝙧𝙖𝙥𝙞𝙙 𝙛𝙞𝙧𝙚 𝙦𝙪𝙚𝙨𝙩𝙞𝙤𝙣𝙨 🔥 


Resources:

➡️ Send your warm regards to Alice on LinkedIn

➡️ Then do the same with Claudia on LinkedIn as well

➡️ Get to know the two new brains that worked on “Fast Forward”: Jeroen and Florian

🔗 Check the Sustainability Puzzle’s website, where “Fast Forward” has its own dedicated tab



Full Transcript:

These are computer-generated, so expect some typos 🙂



Antoine Walter: Hi, Alice. Hi, Claudia. Welcome back to the show. I’m super happy to have you back with a new book. Actually, I have your two books with me right now. The first one, which got you here, was The Sustainability Puzzle, and you just went and ventured into a new field with Fast Forward, which is a pretty different yet adjacent topic.

So that’s what we will be discussing today. But first: we recorded two years ago, and since then I’ve seen you discussing The Sustainability Puzzle on several platforms, stages, and forums. And I was wondering, what was your experience of that? How did you see a change? What influence did that work have on you and on the people you met?

Alice Schmidt: It’s so great to be with you again, Antoine. Thank you for having us and for actually reading the second book in no time. I think you’re probably one of the first. I’ll let Claudia start, perhaps.

Claudia Winkler: That’s very kind. So also from my side, thanks a lot, uh, Antoine for having us again. We totally loved our first podcast with you and we’re still promoting it because I think we found some interesting topics there and I hope we will find some interesting topics here too.

I think two years ago, when we launched The Sustainability Puzzle, where we discussed how to use circularity, social justice, and technology to create a healthy and fair planet for all, we hit the nerve of the time. Sustainability became more mainstream. And I think what happened then was that our book helped quite a lot of people, at least that is the feedback we got, to understand the topic, people who were maybe not sustainability experts but were new to the topic.

They got an understanding of the bigger picture, and that bigger picture of sustainability was important for us to transfer. The book helps them get into the topic step by step. The feedback we got is that we achieved this goal quite well, without losing people who were already on the sustainability journey, who were maybe experts already on some topics but didn’t have the full picture or didn’t see other areas adjacent to this topic.

This is what made the book successful, and this is also why we decided to work on the current book. What happened is that we hit a nerve with the first book on sustainability. And we see a lot of areas, mainly six puzzle pieces, which we will maybe elaborate on later. One of them is technology. And we want to show, in an optimistic way, that it is possible to change our future, to get to a positive outlook for our future.

We just have to start acting.

Antoine Walter: Do I get you right here that this new book, Fast Forward, is actually about one of the six puzzle pieces you derived in The Sustainability Puzzle, which means I have to expect five more?

Alice Schmidt: Exactly. And we’re going to double the number of authors each time. So that’s going to be fun, because we went from two to four, right?

One thing to add: people really liked the optimistic approach, right? And that’s something we’ve tried to continue with Fast Forward, and some early reviewers have actually commented on that as well. The feedback that’s touched me personally the most is people, particularly young people, saying: you know what, I started a career in business, and now I totally want to go into sustainability.

And that’s why I think there’s impact, because it makes a difference where people work.

Antoine Walter: Is it still a different field? So you would be either going into business or going into sustainability?

Alice Schmidt: No, you caught me out on that. No, of course it’s not. But if you’re serious about sustainability, you really have to make it a focus.

You really have to understand it. And these days, sustainability has become such a market, right? A lot of people are going into it only for the money. And I personally, and I think Claudia too, right? We’re both very impact-driven, and we find this very sad. It’s also going to create more problems, I think, in many cases, than it solves, because it’s people that don’t fully get sustainability.

So yes, indeed, we’re zooming in on the one piece of the puzzle which is technology. And I always like to tell people, and then I’ll let Claudia speak again: had you told me I was ever going to write a book on technology and AI, I would have said, are you crazy? I would not have believed you. And of course, with The Sustainability Puzzle, we went there because it was important not to forget this important piece of the puzzle, but it’s just one piece. And now the sort of hype that’s been created around ChatGPT, and seeing how the world is impacted by AI, made me realize something.

It needs people like me, like Claudia. Claudia’s always been more into tech than I have, right? But it needs those kinds of people from other disciplines to actually delve into the technology piece of the puzzle.

Antoine Walter: That’s one specific part, which is very important in the book, which you repeat and which you underline in several chapters.

So I’d really like to discuss that. But before diving into it, you mentioned that the book is positive, and I agree with that. So let me be the negative guy here. You’re asking, at the very end of the book, if AI can help us make our global economic and political system more inclusive and ecologically sustainable.

That’s a great question, but it wasn’t clear to me that AI had this ambition at all. Do you think AI has this ambition of supporting sustainability and supporting social and global economic development, or does AI have a different agenda, if it has an agenda at all?

Claudia Winkler: My call to you is always: technology is agnostic.

So technology per se doesn’t have an agenda. It’s the people who create the technology, and we all are the people. And this is why we say: maybe it’s not on the agenda yet, but it’s important for all of us to put it on the agenda, to recognize that this technology can be very valuable for solving global issues on the social and ecological side.

And maybe also on the economic side; hopefully it can be a driver for a sustainable future for all of us. But it’s us, the society, the people living in this world, who have to shape this future. If we leave AI where it is now, then maybe people who have no interest in doing that will form and shape AI in a way that doesn’t benefit all of us.

So I think this is the call we make, and this is why we have been dealing with AI before. I’m from the tech side and have been in this field for quite some time, not specifically AI, but technology. What got us working on the book was that suddenly people understood something is coming, and we wanted to take away the fear of “something is happening, I don’t know what’s going on.”

We wanted to help people understand that there are definitely threats, and there are a lot of threats all over the media, but there are also opportunities, and we are not just sitting there; we can get actively involved in shaping the future. And this is why we believe AI per se doesn’t have an agenda, but people can shape the agenda of AI.

And the more people get involved who want to use AI to shape a positive future, the more likely it becomes that this happens.

Alice Schmidt: It’s just a tool, clearly, but it’s also big business, right? Which means a lot of money, thinking, data, everything is going that way. And that makes it very, very powerful. And of course, it matters where the power is, because that has an impact on what decisions are taken at the global level for, you know, environmental protection, for social justice, for the economy.

And unfortunately, and I really see this, a lot of people at the forefront of AI, of big tech, which in theory could be neutral, are not really that neutral. And I think the difficult thing is that many of them are very well-meaning; they have a mission and they want to save the world.

And that’s actually what they say, right? But I also feel that a lot of them don’t know what they’re talking about. So they have this trust that what they’re doing is going to help society, but they have big blind spots when it comes to other parts of the world, other parts of the population, when it comes to some negative impacts.

And another point we keep making is: good intentions don’t necessarily lead to good results. And that’s why it needs people like Claudia and you and me to co-create and co-shape and influence that, to make sure that we harness AI for good.

Antoine Walter: Talking of you specifically, Alice: you’re hunting down greenwashing.

That’s one of your red threads, which I’ve observed over the past two years since we last spoke. And when I’m listening to Sam Altman, the founder of OpenAI, there are several ways to hear what he’s saying. One is to think he is legitimately trying to change the world for the better. Another is that he’s not quite sure if what he says translates into facts, but he has good intentions.

So that would be the positive side of it. The negative would be to say he’s just saying he wants to change the world for the better. But at the bottom of it, he wants OpenAI to be the next GAFAM that kills the others. What’s your opinion on that?

Alice Schmidt: I think, from how you phrased the question, we’re pretty aligned on that.

And with Sam Altman, as I think with a lot of these people, I don’t know Sam Altman personally, I’m not stalking him or anything, but I do think part of him has good intentions. But we also know, of course, that Sam Altman was lobbying against tougher regulation, right? The very regulation that he was actually, in principle, asking for and saying was needed.

Through the back door in Brussels, with the EU AI Act, he was observed and recorded actually doing the opposite. And that’s of course what we’re seeing with lots of large companies.

Antoine Walter: Doomster and the optimistic disagree. So, Claudia,

Claudia Winkler: if you would see me, you’ll see I raise a hand all the time. I think Sam Altman is an interesting topic to discuss because actually that was one of our first conflicts.

I have a background in innovation, startups, technology, and I also ran the Y Combinator Startup School. So I have to admit, I have been a fan of Sam Altman for quite some time, and I followed what he was doing. And of course, if you listen to him with this filter, thinking he’s having good intentions, you think: wow, yeah, he and OpenAI want to contribute to the good.

You could be easily blinded and say: well, actually, he really wants to help the world, and you could fall for that. And I think this was one of the first intense discussions we had: could or should you fall for something like that? And I think it’s not black and white. I do think he has good intentions, but he is not in a position to be democratically legitimized to take decisions that affect all of us.

So even if he has good intentions, what is his right to, you know, shape our future? We actually watched a lot of sci-fi movies when we developed the book, because I wanted to see what happens, and there is this book, The Circle. I don’t know if you know it, where the team is sitting at the table discussing whether they should, I think it was, influence the elections.

Exactly. They were so convinced that they were doing the right thing and the best for the world. But the problem is, there was no democratic legitimization for what they were doing. And I think it’s the same with big tech companies. Maybe they have good intentions, but who makes sure those intentions reflect

the will of the people that are affected by their decisions? And I think that’s the biggest problem, and this is how you have to look at it. Intentions alone are not enough, as Alice said. And they are not necessarily bad guys who just want to make money and kill the whole world; I would not ascribe those intentions to them either.

I think the truth is in the middle, and we have to make sure that the good intentions in the end translate into something really good. And where we see as a society that this is harming us, we have to get active. We have to call them out, go on the streets, whatever is needed, call for regulation to make sure that the intentions really influence us positively.

Antoine Walter: Actually, your book example, and its movie adaptation, is in your books. I could take you on a full sidetrack here and discuss the invisible hand of markets and Adam Smith, discuss whether regulations are needed at all, or whether the world can regulate itself. But I think that would be an interesting, broad, but different topic.

Let me stay for a second on OpenAI, because, of course, what kicked off AI as a mainstream topic is the inception of ChatGPT. You mention at the beginning of the book how ChatGPT could have been your fifth author, and you decided not to. I’m interested in understanding why. But I also thought: if you’re openly saying that ChatGPT supported you, I might also get some support from ChatGPT.

So I asked ChatGPT, as a podcast interviewer, to suggest questions that I should ask the authors of a book on how to harness the power of artificial intelligence for social progress and a sustainable future. That’s the exact prompt I gave him, and he gave me 15 suggestions. And I noticed that your book actually answers those 15 questions.

So I’m wondering if that’s a coincidence, or if it was part of your approach to have ChatGPT suggest what should be answered and then to come up with a clever answer.

Alice Schmidt: I might have a different answer, but there are many things I’d like to point out here; I’ll keep it short. If the question is: did we ever put into ChatGPT what a book with such a title should cover, did we ever do that?

No, we didn’t do that. I mean, if anything, it shows that we actually touched on the right questions, if you assume that ChatGPT also asks the right questions, right? We were quite comprehensive, and that was quite intentional. And all of us used ChatGPT in different ways. And this is also where the doomster-versus-optimist thing comes in again.

I mean, you already sort of called us out. You called me a doomster, even though I never said I was one. I mean, right now we’re much more in the middle, but I was very skeptical from the start. And I was quite happy, you know, in terms of getting some ideas from ChatGPT, and I thought the language ChatGPT uses was actually quite helpful, though its language abilities recently have been super disappointing. So I think there are some changes going on. One thing I want to point out: when you framed your question, you used the terms “him” and “he,” and actually, I’m sure you read this in the book, because you’re a very attentive reader, we actually warn against that.

Antoine Walter: Interesting mistake I made here, because when I wrote my outline, I made the effort to write “it,” but now, when I’m raising the question, I naturally refer to it as “him,” a chatty little chat guy. Sorry, I didn’t want to cut you off.

Alice Schmidt: No, absolutely. And it’s great that you actually called yourself out, and of course you’re not alone.

Even, you know, we sometimes catch ourselves doing that, but it really changes the way you trust what a machine comes up with. So I think a very helpful reminder is to always think of ChatGPT as a computer program, to call it a computer program, to call it a machine, right? And that way you don’t run those risks as much.

Claudia Winkler: As Alice pointed out in the beginning, I have a bit of a different approach to that. I think the tool is a great way of augmenting what we do. I’m not a native English speaker, so for me, this comes in quite handy, helping me avoid grammatical mistakes. I’m a bit of a mixed-up English person; since I was a kid, you know, I’ve had some dyslexia, I think that’s the English word for it.

So this tool perfectly helps me get my writing in order, helps me get faster, get more efficient. I’m running a digital impact business, and we use these tools a lot in order to make us faster, in order to help us get things done that are cumbersome for us. For me, also in the writing process, ChatGPT definitely assisted me when writing.

We had a lot of conflicts on that one, because Alice had a different approach. She has been living in English-speaking countries for much longer than I have, so for her there were different issues. But I think if you look at the two of us, everybody can take things out of the tool that will help him or her augment their way of working.

That’s a call I want to make, because I believe it can democratize knowledge. I have a story about my 14-year-old son. He published a book on traveling with ChatGPT and Midjourney within three days and put it online. And his English improved a lot from working with this. It’s like a tool that helps

really everybody get work done. And yes, it has limitations, and you cannot use it for quoting; you have to be aware of the content it’s using. But if you know the limitations, then you can use it in a way that really augments your work. And I believe this helps us as a society to have a better output than we had before having those tools.

But this is something where we have very different views, and it’s okay to have different views on that. Everybody should use the tools the way they suit them. But before you can make this decision, you have to try the tools. You have to work with them in order to see if they work for you. And this is why we have this call for AI literacy in our book.

Get informed, get involved, try the things out there, so that you know what you’re talking about and can make informed decisions on whether you want to use this tool.

Alice Schmidt: Claudia, it was you looking into this at the time when we actually thought of making ChatGPT a co-author, when we still thought we would use ChatGPT to actually write copy, which in the end we didn’t. What we found was that you couldn’t actually legally make a tool like ChatGPT a co-author. Going back to the him or her: you know, it’s an “it.”

Antoine Walter: Actually, you’re touching on something which was very trendy when ChatGPT came out. Allegedly, every single YouTuber made a video about “I interviewed ChatGPT,” or “ChatGPT did this,” “ChatGPT did that.” One of my good friends in the water world, Walid Khoury, did a live session asking ChatGPT for questions and seeing what the answer of the tool was. Which, by the way, showed that if you’re starting from scratch, it might be helpful as an assertive layer, but at some point it’s simply very vague, because it’s a very specific topic.

And at some point it’s simply very vague, but because it’s a very specific topic. So ChatGPT is one part of the equation, but you mentioned this AI literacy, Claudia, and I think that’s a big element in the book. You’re talking of a lot of artificial intelligence topics. I mean, ChatGPT. Is a part of the book, but a small part of the book because reading you and also what I see in my daily life, ChatGPT is only a small side of the AI equation.

So do you think ChatGPT is anecdotal or is it overbaked because that’s the one we can Speak with, what’s your opinion about the size Judge G PT takes into that conversation? You know, it’s

Claudia Winkler: interesting because we started out and wanted to write about generative AI and we ended up writing about AI in general because we found out there is much more to we we knew.

But you know, like it was an important to us that the big picture is much bigger than if you do just focus on j d I think what was interesting on church GP was that suddenly, I don’t know, the preschool teacher of the kid of my neighbor could talk about AI because she was aware of it. Before she was not aware of it.

This is the specific thing about ChatGPT, even more than Midjourney and Stable Diffusion or these picture tools, because everybody can easily take out their phone, try it, see what it does, get a feeling for what it’s doing, and understand what impact it could have. It is the fastest-growing consumer app.

It had millions of users within, I don’t know, the first two weeks. We have the exact number in the book, but I have to admit I forgot it. But it was one of the fastest. We cannot say it’s just a sideline of history. It’s definitely something that changed our way of dealing with technology.

Antoine Walter: What you’re mentioning here, is it ChatGPT specifically, or is it AI? In the book, you compare it to computer technology. You compare it to the wheel, you compare it to the space race. You compare it to all of this.

Claudia Winkler: It’s AI in general. In the book, we talk about AI in general. If you want to compare ChatGPT and generative AI with a recent development, then, and I’m from the telco industry, I have a background of 20 years in telecommunications, compare it with the rise of smartphones. Suddenly everybody had access to a little computer in their hand and could use it easily, with an interface that Apple created. So it was democratizing access to technology that was there before; the technology for video calls existed already in the mid-nineties, but nobody used it before smartphones came. And I think it’s the same here. Generative AI is the thing that enables the mass-market adoption of AI tools, in whatever way. The bigger picture is that AI is changing our world; generative AI is just a part of the whole picture.

Alice Schmidt: Regarding ChatGPT: first of all, it’s a multipurpose tool. And secondly, the way it answers and writes, people fall in love with it; people have even committed suicide on its sort of recommendation. And that’s what’s made it successful, with millions and today billions of users. So it is actually important that we do talk about ChatGPT, even though there are others. I think all the other applications are often much more specific, and they’re very important, but they’re not as impactful at a very large level. We can talk later about, you know, how to protect biodiversity or prevent a wildfire with AI, right? And I think these tools are really, really important, but they’re not going to be used by a huge number of people, right? And that’s why I think it actually is important to give attention to tools like ChatGPT.

Antoine Walter: I think in the book you make the analogy to the iPhone, like you just explained, Claudia. The iPhone came out in 2007, and IBM produced the Simon in 1992 or 1993, which had similar capabilities but wasn’t as well aligned with the times. ChatGPT has this anthropo…, oh,

Claudia Winkler: Anthropomorphizing. A difficult word, I always struggle with it too.

Antoine Walter: It has these human characteristics. Arguably, Siri, Alexa, and the likes had that as well, but

weren’t aligned exactly like ChatGPT was. What is this one specific thing about it? It could have been Google Bard, it could have been whatever else is out there, but the one which popped up is ChatGPT. What makes that one special?

Alice Schmidt: I’m just observing here, and I actually thought about it last night, that we’re moving from female-connotated tools like Siri and Alexa into the male domain. And I was wondering what that means in terms of power. But that’s an aside. I couldn’t comment on ChatGPT versus the others; I mean, it came out, it was the first one to be released at scale and actually touch a lot of people, but I couldn’t personally compare it to Bard and the others.

I don’t know if Claudia can.

Claudia Winkler: There was an interview with Sam Altman, and even he was surprised by the huge hype it generated. Some things generate hype. You know, GPT had been there before, but the chat interface wasn’t there, and that basically changed things. It was just a first hook of what’s going to happen.

And you mentioned Google Bard and the integration of generative AI into basically all tools. AI is everywhere, in all tools, and it will not be like from one day to the next we switch and the world is AI. AI will gradually come into our lives, or has already been doing so for years; it’s there in the background of many, many applications.

And ChatGPT generated a lot of attention because it was a way to interact with AI, and, I think, the first time people were able to interact with AI.

Antoine Walter: I would still like to pin down the AI literacy, because to me there are two different ways to look at it. I’m personally using AI quite a lot to enhance my podcasting setup, which is a one-man band.

By the way, ChatGPT is maybe the one I use the least; I’m not a big fan, but that’s not my point here. My point is: I’m using AI quite a lot. Midjourney has been my partner in crime for a whole lot of stuff. Yet I wouldn’t consider myself AI-literate at all. I learned a lot about AI literacy in your book.

And to me, your book was a good entry into this AI literacy world. So how do you bridge these two levels? One piece of your advice is that people should use it so that they understand what it is. I’m there. But then people should get literate about it, and aside from reading your book, I don’t get how to do that.

Alice Schmidt: I think it’s also about losing fear, right? In the book, we always distinguish between techies and non-techies, right? And of course it’s an oversimplification, but I’m clearly a non-techie and I probably always will be. And it’s really about losing this fear.

We know how hard it is. to get girls and women into sort of the STEM field, science, technology, mathematics, engineering, et cetera. Developing AI literacy initially means overcoming this and just engaging with it. Personally, I see it as a sort of gradual way. You, um, Antoine, now he’s using several tools beyond JATCPT.

You're probably at quite a literate level, and I think ideally everyone should get to that level. Of course, this goes hand in hand with broader digital skills. It's not just about AI, and in fact we often can't isolate the AI component of some kind of model or tool. Literacy is very much about using it, knowing what it means, but it's also about understanding the risks.

It's understanding the opportunities. It goes back to basic things like: what are my sources, who says what, how to sort of critically analyze what I'm doing. And also the main thing we've been emphasizing since The Sustainability Puzzle, this bigger picture of what does that mean, not just for a specific solution, but for my life more broadly, for other people's lives, for us now in the present, but also for us in the future.

When we talk

Claudia Winkler: about AI literacy, we don't just talk about using tools. That's the way to get into it, but it's especially about understanding the risks and opportunities. And they are quite generic for technology overall. So there can be negative side effects: just check them, be aware of basic things.

With AI specifically, you have this topic of bias, you have this topic of trustworthiness, and all these things. Just be aware of a few facts when you use these tools; don't just use the tools and think, wow, it's cool. I'm a person who does that, but then I have to hold back and say, okay, hold on. So what's the data that's used, and what happens with my data?

Just be a bit aware of what you're doing when you use these tools. This is what we also call for when we call for AI literacy. So it's using the tools, but being aware of where the data comes from, what happens with your data, and being aware of the risks involved.

Alice Schmidt: You’ve read the book. Congratulations. We want everybody to read the book.

And I really think what would help is a book like this, right? And we're very happy to actually turn this into children's versions and zoom in on this or that. But I think in a way, that's it, right? We need the videos, we need the cartoons, we need all of this. And at the moment I think we're in a niche, right?

I mean, this hasn’t really happened. So yeah, people should

Antoine Walter: get our book. But that's exactly my question here about the how, because I'm a water engineer. If a new water company comes out and has a new take on technology, I can refer to three centuries of literature and documentation about hydraulics, water treatment, and so forth and so on.

If someone comes with a strong assertion on AI, honestly, I have your book, a couple of YouTubers I know, and a couple of newsletters I’m reading, and that’s everything I have. How do I get to AI literacy? I think a

Alice Schmidt: very practical thing is to integrate it with the growing number, and also the growing quality, I guess, of media literacy trainings. Think of schools: my daughters now learn that kind of stuff, and that should just be part and parcel of it.

And, you know, we all know that schools and universities are developing AI policies. They are partly very much coming at it from the ChatGPT kind of risk, as they see it; not a lot of them see the opportunities. Others are already teaching it. I've heard cases where the ICT instructors are really encouraging students to use it, with the result that the language teachers aren't so happy, because the students are actually using it to write their papers.

I really see schools clearly having a really important role, but of course there's also the non-formal education sector.

Claudia Winkler: Sometimes you lose people if you talk about AI literacy and learning it in school, because, well, I'm not in school anymore. It's lifelong learning. It's you yourself, and it's companies that have employees they want to make future-fit, that should invest in giving them basic trainings in AI, as they should in sustainability, digitalization, everything.

So it's part, I think, of common knowledge, common sense. And Alice always mentions the Finnish example. So Alice, maybe you want to share that one, because it's really nice. Quite a few years

Alice Schmidt: ago, so long before ChatGPT came out, Finland was saying we need a significant part of the population, I think it was 1%, to be really trained and engaged.

It is something, of course, that governments need to do much, much more of, and again, they can integrate it with all these drives to get people to pick up, you know, STEM education, STEM jobs, et cetera. Since we launched the book, we've had companies and institutions approach us and ask us to at least speak about the topic.

Because for many, this is really, really new. And in my bubble, right, a lot of people are more from the sustainability landscape. But this morning I was talking to a techie. She's a classic example of someone from that different bubble, right? From the techie bubble, and she was very appreciative of having someone dissect a bit the impact,

positive and negative. I think that’s

Antoine Walter: the part of the book where I'm a bit less optimistic than you are: the role of schools and universities. It's a discussion I had on the microphone with Paul O'Callaghan about the adoption curve of innovation in water. So it might be water-centric, but what he's showing in his research is that universities are the laggards.

They are waiting for everything to be adopted along this Gaussian curve, and once it is really validated by 90% of the population, then it goes into education. So I have a hard time imagining the school system being at the forefront here. And actually, the example you give in the book is also what came out in the press, which is this element of students cheating by using ChatGPT to write their essays.

Honestly, I was a student when Wikipedia came out, and we were doing that too; it's just that it was Wikipedia and

Claudia Winkler: not ChatGPT. I'm with you on that one. This is why I mentioned this lifelong learning. And it also goes via parents. I don't know if we mentioned it, but the book was written by Alice and her husband, and me and my husband.

So basically we are four people, and we are parents: Alice has two daughters and I have two sons. And one of the goals for us in writing this book and researching was also being able to help our children get literate. And I love to share this book in dinner discussions with my non-tech friends who are parents at my children's school.

And it's very, very important that this discussion happens, because I can discuss with them what impact it might have on their children. They might not necessarily be techies, but if I talk about it, they get an understanding and they can help their children get literate. It's not just the school system; it's also the kids at home talking about, I don't know, AI tools, or new fake news they found somewhere, which is often, you know, generated by bots.

We recently heard something, Alice, you were there, right? Somebody told us about the fake messages you can now get via AI saying that, I don't know, your kid is in trouble with the police. It can synthesize the voice of your kid, if it's somewhere on the internet, make the message sound like your kid, and the "kid" asks you for money,

and stuff like that. And we need to be aware of this. This is why we as parents need to talk about this. I have a lot of discussions with my 14-year-old about whether something on the internet is fake or not. And with AI tools, it's much, much harder to identify fakes than it was before.

And this kind of literacy we as parents also need to share with our kids, or we need to be aware of it ourselves. It's our responsibility as parents to support our children and not just rely on the school system and wait for the school system. And I'm also a bit skeptical about the school system, like you, Antoine; it only finally moves.

Finally, five years later, they have sustainability education at universities. This was not there five years ago; now it's there. So hopefully the same happens with AI literacy. And as long as it's not there, we as parents have the responsibility of supporting our kids.

Alice Schmidt: Never as fast as technology, right? So technology is always faster than education, than regulation.

So I think that's clear. And universities are particularly slow. It is interesting, Antoine, that you find the chapter quite optimistic, because I actually led on it, and I'm, as we know, a doomsayer or a critic. I think one of the things that is clear in education, and has been clear for decades, is that education is also a big business.

A lot of companies developing so-called solutions are really not solving an actual problem. They're just there to make money. And so we are facing this huge number of so-called intelligent tutoring systems, some of which are great, some of which are excellent in specific situations where, say, teachers are lacking.

Some of them are wonderful in the back office, but many don't actually help that much. And education is so much about people. It's so much about human interactions, about social learning. It's our future, so we really can't leave this to machines. And at the moment I can see clear risks. In the book, we describe AI tools that are only necessary because AI creates the problem in the first place: for example, tools that ask teachers to just stay at their desks so they can monitor their dashboards, which give them information on students' concentration, brainwaves, whatever.

So that's not a situation I want to create. I mean, there clearly are benefits and opportunities, but again, you need to have goodwill and you need to measure results; they're not going to materialize by themselves. Claudia made a point on lifelong learning, and I think in principle that's a great one where AI can help, right?

Each of us having our little instructor or teacher in our pocket, and this teacher or instructor would know our entire CV, know everything we've ever learned, understood, read. So that could be very, very helpful, but of course it opens up other questions in terms of privacy, safety, et cetera.

You’re

Antoine Walter: mentioning privacy, safety. That's actually the second red thread in your book, which is: how do we deal with the regulation of AI? And you highlight something which is the kind of thing you know but never really notice, and once you've highlighted it, you see it everywhere.

Those big companies are bigger than most states in terms of their size and influence. So is it, again, very optimistic to think you can regulate them? How shall we regulate them? Shall we regulate them at all? I'll start

Alice Schmidt: on this. In the research for this book, one of the messages that stuck with me the most was: the era of light-touch self-regulation must end.

And that's something that Mariana Mazzucato, whom we've been citing again, but also Gabriela Ramos from UNESCO, has been saying. So at the moment, not just with AI but with other things, we've been trusting large companies to self-regulate, to make voluntary commitments on sustainability, on, you know, a lot of dimensions that actually affect all of us.

And they get away with this because they are so powerful, but also because they know how to play governments. They know how to use the right words. One more correction: they're not more powerful than most nation states, but some of them are more powerful than very, very many nation states, particularly if you look at financial power, if you look at data, but also if you look at followerships and client bases. It's really, really important.

And this power has been concentrating over the past few years, and it's concentrating even further. Having said that, governments, and some governments more than others, still have a decent level of power. We do see that the AI Act in the European Union, but also other initiatives, in the U.S. for example, are moving and getting there.

And at the moment with the AI Act, for example, the big question is: does this risk-based regulation actually still hold? So it's basically coming back to a technical question of how to best regulate AI. And ChatGPT is the classic example, right? Because it's a multi-purpose tool, it really depends on what you or a given user does with it.

Whether this is a high-risk application or not. And we know that big tech, we've talked about them often, aren't really in favor of this regulation, even though they're telling us they are. And I do think regulation has a point. I think it is not completely unrealistic, but when I am confronted by companies telling me, oh, we need more regulation,

we want more regulation, I'm very skeptical. And it is important to keep reinforcing governments' power in a way

Antoine Walter: as well. Just connecting that to the AI literacy we discussed: you've probably seen the hearing of the head of TikTok by the U.S. Senate. And we had seen the same scene when Mark Zuckerberg was in front of that same group of people.

The questions they asked made you realize that they simply had zero clue about the topic.

[Hearing excerpt] "Does TikTok access the home Wi-Fi network?" "Only if the user turns on the Wi-Fi. I'm sorry, I may not understand the question." "So if I have the TikTok app on my phone and my phone is on my home Wi-Fi network, does TikTok access that network?" "It would have to access the network to get a connection to the internet, if that's the question."

Antoine Walter: And that is social media, which has been known for a bit longer, almost 20 years. How can you expect governments to come up with the right regulation and address the right risks when AI literacy is in its infancy?

Um,

Alice Schmidt: it's a very good question. I think the challenge isn't so much a technical or technological one. In fact, when I've heard government representatives speak, for example at the Alan Turing Institute conference in the UK, a lot of them felt that several aspects of AI regulation were actually already covered by existing regulation.

So I think technically it's possible; it's more that it covers so many different fields and sectors that governments within themselves, and clearly governments with other governments, find it very hard to agree. Policy and lawmaking at the European Union level touches on a lot of vested interests, and that makes it very difficult. And I think there's this added complexity, perhaps, that AI regulation only really makes sense if we standardize

and globalize. We don't want each country, each region to have its own rules and standards. That would be even harder, also for companies to then follow. So I think it's this multidimensionality that makes it so hard, rather than the technical questions they're grappling with. Because in the end, yes, governments may not be fast.

They may not have all the experts they need, but they can bring them in, and that's what they've been doing.

Claudia Winkler: Alice touched on many, many points. I'm not that optimistic that you'll find a global regulation on the issue, but I believe there are some good regulatory efforts, like the EU AI Act, and we had a lot of discussions on that.

Is it stopping innovation? Is it a global competitive disadvantage if we go there? I believe that as a society, this helps us more than it harms us. And this is one of the conclusions we came to in our book, and it was a hard-reached conclusion, because Flo and me, the tech optimists, were in the beginning rather against regulation.

But coming from the business world, I do know nothing happens if there is no regulation; the self-regulatory stuff we will normally only do in the business environment if it benefits our companies. But then the question came up: is it possible to regulate or not? Can we do the things that we as a society want?

Somebody, and I think it was Jeroen, Alice's husband, who's a cell biologist, drew the analogy of Dolly, the cloned sheep. We as a society could probably, technology-wise, already clone more than we do now, but there are clear regulations against doing it. So if we achieved it there, why not achieve it on other issues that are important to us?

And not sticking now just with regulation: for example, the European Green Deal is making good progress, a lot of things that are moving our society forward. So I want to end this regulatory discussion on a bit of an optimistic note. If we all want it, it can be done, and the cloned sheep is a good example that it can be done.

Maybe this is an inspiration for all the other rules and regulations. And maybe it's not best for us as a society in Europe to always be the most innovative ones. Yes, we will lose out on innovation a bit because of the AI regulation, but on the other hand, we may save our European society from a lot of problems that might occur if we don't take this route.

Alice Schmidt: Also to make this a bit more optimistic: what we're seeing clearly in Europe now with the CSRD, the Corporate Sustainability Reporting Directive, is that this is a game changer, because the very same companies that might not have been making substantial efforts towards sustainability until recently, even though they were aware of the business case and they knew things were going to come, are now falling over themselves to really tackle this.

Because they now have to be much more transparent, there is a question of responsibility. All of a sudden, CEOs have to sign off on the data provided, and auditors have to actually, well, audit. This is happening, at least in Europe. And I just want to note that this, of course, has a geopolitical dimension too.

And Claudia touched on it, because there is this fear that Europe, with all these regulations, is going to lose out. But I love what a student of mine once said. They were talking about the Enlightenment age and saying, well, you know, at the time Europe was also very different, but the world followed in Europe's track.

And so I do hope that it will follow

Antoine Walter: again. I like your optimism. And I feel bad, because today I'm really the one dragging you down with negative thinking. But you mentioned Dolly, and what's the incentive there? Sure, regulation has probably prevented us from cloning more, but cloning was more expensive than breeding.

So there wasn't really a strong drive to push for cloning. But you can win a decisive edge if you have people augmented by AI. I mean, you have this example of the climber who is augmenting himself because he lost his legs, and then you cite people who say they almost considered amputating their own legs because bionic legs could be faster.

It's a metaphor, but if you use that metaphor, you might think that for someone with the wrong intentions, there's a good incentive for going faster than whatever regulation is going to bring. I don't want to sound too much like the doomster here, and I don't want to drag it down to that again.

You mentioned sustainability, so let's be positive. That's chapter six in your book, and for all the ones like me who enjoyed the first book, that was of course the chapter resonating the most with it. You show plenty of examples of great use cases and potential for AI to support us there.

Which brings me back a bit to my very opening question: does AI have the intention to support us there? And how will it? I

Alice Schmidt: mean, again, I think AI doesn't have an intention per se, right? It has to be actively harnessed. And perhaps there is an intention in AI, and that's making money. Most actors mostly have this intention.

No matter what they say. So that's clearly that.

Claudia Winkler: I'll jump in here. For me, the best example of whether AI can help us or not, and whether it does more harm than good, is one of the topics we touched upon in our book, the ecological chapter. I mentioned in the beginning, I think before we started the recording, that we had a lot of discussions on the footprint of AI.

And the doomsters were all saying, wow, the footprint of AI is so bad, but this is minimal compared to the overall CO2 footprint of other things, and on the positive side, you can use AI in the environmental area for much, much more than this negative aspect. For example, think of climate tech startups that measure impact with AI.

Think of the CO2 footprint of buildings. Think of creating smart cities. Think of precision agriculture, where you can use AI to be more environmentally friendly and sustainable with fertilizers and things like that. There are a lot of positive aspects there. And I think this is what you wanted to touch upon, Alice, and I'll let you

discuss this one further. And there are a lot of positive aspects there, and those will definitely play out, because there's a market there. This is one of the biggest sustainability businesses: going into the food industry and making it more sustainable, going into carbon footprint accounting. There is money there because there is a threat there. This will help push the technology in the right direction. Also energy efficiency, most important; I'm sure in your industry that's a huge topic there. So there are a lot of fields where AI will be used for the positive of society, because there's a

Antoine Walter: market there. To come back to your agriculture example, and then I'll let you speak, Alice.

To me, reading the book made me think of a discussion I've had several times with several people within this water-agriculture nexus. There's a study by, uh, Paolo D'Odorico, I think, I hope I'm not butchering his name, showing how we're not growing the right crops in the right places, and that if we were growing better crops in better places, we would solve part of the yield issue of agriculture.

So that's one part of the equation. The next part of the equation is that if we were to do that, we would end up with parts of the world producing just one crop, and that's not the best mix in terms of human interaction. You would also have more transport, because now you need to move those crops around.

So there's a positive and there's a negative, and as a human looking at that, it is complex, because you don't know how to weigh this stuff. And that's where I see a clear perk for AI, which is that AI can look at those multifaceted problems, find the right interpolation, and come up with something dispassionate as a result.

So that’s a clear positive for the use of AI, which you’re also highlighting in the

Alice Schmidt: book. Exactly, because I think we humans are very good up to four or five parameters, right? Beyond that, we're not good at actually factoring things into our decisions. Of course, intuitively we do a lot better than machines can.

Just to give an example of how people are still better than machines: we have this cover, beautifully developed by Florian, Claudia's husband. You know, we had sort of decided together what we kind of wanted to be on this cover. And then it was a lot of prompting and iteration, again and again and again.

And only a couple of days before publication, I think, did we realize that the human hand was actually showing a middle finger. And we actually debated: is that a good thing? First of all we were like, oh my god, how could this happen? But it's one of those things where it's totally clear to a human, but it's just not clear to a machine; it hasn't been prompted not to show a middle finger in that kind of image.

And we even said, oh, perhaps we want to keep it; it could cause a shitstorm, and that might be good for us. We didn't in the end, so thanks to Florian, we have another one. On the ecological impact, just two other things. The good news is that because we know the factors that cause the negative footprint of AI, you can do something about it.

So you can actually minimize or reduce energy use by choosing a location. We have this example in the book: if you run your servers in Australia, they take 73 times as much energy as if you were doing it in Switzerland. It's a question of the time of day; it's a question of a few other things.

Of course, the size of the models actually matters, right? So less might be more, small might be beautiful. But the most important part of the footprint actually comes from the hardware, if you want to do a full kind of life cycle assessment. That's important, and the e-waste of course

Antoine Walter: is part of that.

So to me, the outcome of the book is: you should relocate all the activities to France, and then you're good, right? That's what you write.

Alice Schmidt: Yeah, that would be good

Claudia Winkler: for Europe. Yeah

Antoine Walter: On the agricultural side of things, there's one example in your book, this SoilProS one, which is to me one of the most interesting ones you showed. I was also very curious about the chess one, but I thought that's curiosity.

That's not real interest. But here, on SoilProS, I looked a bit deeper into it, and I really think it's a very interesting approach to this multi-parameter problem; it's the perk of AI compared to the human brain. Claudia, Alice, what would be your preferred example, each of you, within your book or beyond it, from your research, that you want to share

Alice Schmidt: today?

I think one of the interesting things about the SoilProS one is that it's a tool, but it's also a multi-stakeholder coalition in real life. In the end, it always depends on who uses it for what purpose. You know, how broadly do they think, how do they bring in their joint expertise? How willing are they to learn about the impact?

So that's that. Personally, I like mentioning two examples in the sort of broader environmental space. One is actually a wildfire detection service, because it has a very clear appeal to people. The name is Um Grau e Meio, and I'm probably pronouncing this really wrongly; what it means is 1.5 degrees.

So the idea is that this reduces carbon emissions that arise due to wildfires. It's quite beautiful, because it detects emerging wildfires within seconds, and it then also helps to track fire engines, et cetera. So there's a very clear business case for that. The company is, I think, based in Brazil, don't quote me on this, I'm not a hundred percent sure. And they are actually very successful, also financially, with this. And I think that's a great example.

And they are actually very successful also financially with this. And I think that’s a great example. And the other one I, I really like because I’m more and more into regeneration, um, regenerative farming, restoration of ecosystems. And I like the captain tool, which actually. stands for something like, um, area identification with AI.

So it helps you decide where to invest your scarce resources in a way that maximizes biodiversity in a given place, because humans often tend to focus on, you know, specific species or hotspots that, in the bigger scheme of things, may not be the ones where you want to concentrate your efforts.

So again, it's a matter of AI being very smart in terms of helping you

Antoine Walter: choose. I liked that one as well in the book, the biodiversity one.

Claudia Winkler: I'll make it very, very short. I like to use AI in my personal life, and I think everything around smart cities, making cities greener and smarter, protecting our cities from climate change, those are the examples of AI that I like most.

I don't have a specific company or a specific example, but there are lots of things out there. Especially if you look around, in many cities across the globe there are interesting examples where AI is used to make cities smarter and more livable. And this is something I like to point out.

Antoine Walter: You opened the book with The Line in Neom, so I guess that's

Claudia Winkler: what could be done. That's an example that's a bit, well, we can discuss that one, because we see a lot of negatives on it, but of course there are interesting aspects. So I would not take the example of Neom and say, well, that's a cool example.

It has some futuristic aspects that are quite interesting, but in the bigger picture of the book, the story is not that nice. There are, however, a lot of smaller examples in cities all across the globe that are using AI in ways that are quite smart.

Alice Schmidt: We had a lot of discussion about whether we should even mention the Neom example, right?

Because we know that there are so many potential issues with it. At the same time, it's a very clear, deliberate exercise of using AI for a city, well, for an entire country even, and using it both to maximize sort of human benefits as well as, um, you know, ecosystem benefits. At least that's how it's intended, right?

And that's how it's communicated. And it is visually a very powerful and fascinating example, right? Whether this is going to fly and how it's going to develop is a different story. I do want to mention one other thing, and I completely agree with Claudia: cities are fascinating, because they bring together basically all the issues there are, right?

And they bring together people, talent, everything. And digital twins of entire cities, of course, are one manifestation of this. And recently Tuvalu, you know, a country that's basically very much threatened by climate change, by rising sea levels in particular, needed to find a plan B, because they're basically going to be submerged underwater, right?

In a few decades.

Antoine Walter: It's what you call ecological racism in the book, if I'm right.

Alice Schmidt: No, that's not actually what I was getting at, but that's a good one. Well, yes, you're right, it's absolutely related to that, right? It's the people and the countries that benefit the least from AI that actually suffer the most from the environmental destruction and climate damage that we're causing.

That's true. And yes, Tuvalu is one of the countries suffering from that. But the solution, again, then lies partly in AI. It's partly about relocating the people, but you know, it's not good enough to just relocate people. A nation wants to continue its cultural heritage, its language, its everything.

And so they're basically creating a digital twin, almost like a metaverse of the country, a virtual copy of the country, so that the people and the history and the culture are going to be preserved. In a way that's a sad example, but it's also a beautiful example.

Antoine Walter: A bittersweet example, maybe. Very well put.

Thank you. I'm sorry, because we are already going far beyond the allocated time, and I could keep this going for a while; I have so many additional questions. I really invite people to read the book, because they will understand why I have many more questions. It's an interesting read, but it's also eye-opening on many topics.

I really appreciated that part. Let me go back to my opening statement that ChatGPT helped me with the preparation for this conversation, and let me raise the question that it, not he, it proposed to me. ChatGPT asks: how can individuals and organizations get involved or take action based on what they've learned in your book?

Claudia Winkler: I think that's a gradual process. It depends on where your starting point is, how far you are on the journey; everybody's at a different stage. So for those who have not been in touch with technology or AI at all: take the first step, try to understand the topic, try to understand that something is happening out there. For those who are involved: try to see the risks and opportunities.

Look at the bigger picture. And for those who shape our tech future, who are really developing the services: make sure that you develop the services with a human-centered focus that takes into account the well-being of all people, not just a few privileged ones, and the well-being of our planet all over the globe, not just in certain areas.

I think this is the gradual thing we see. So if you’re not involved at all, start understanding that we face a problem. If you have a bit of an understanding, try to understand the bigger picture. And if you are the one currently shaping our tech future, because you have the skills and the abilities to do so, make sure that you develop services in a way that they benefit all of us.

And you cannot do that on your own. That’s also totally clear. We wrote this book, the four of us with different perspectives, because we all have so many blind spots. If you develop services, and I’ve been developing digital services for 20 years, make sure that you get different perspectives into the picture.

Make sure that you get people into the discussion who are critical, even if it’s cumbersome, painful, and you don’t want to do that. Because if you don’t do that, you might end up creating the reverse of the future you intended. I think this is our call to action.

Alice Schmidt: On the why, right? I mean, why would they get involved?

Well, because it’s about their future. It’s about co-creating a future that, whether we like it or not, has AI as a big, sort of, element, and we all need to know how to use it more effectively and safely. And maybe just to bring one other example: you know how we, for some reason, trust ChatGPT, perhaps anthropomorphizing it a bit, with our deepest secrets.

Right. And we ask it to write love letters. Companies have been very, very scared, and punished to an extent, by the fact that employees give away company secrets, which are now basically owned by OpenAI. Or, well, there’s a debate about who owns this. So I think it’s really self-protection, and that reminds me a little bit of the whole climate action, environmental action.

We need to keep explaining to people and making sure everybody understands what’s in it for them. We’re not doing this for someone else. We’re doing it for us, for our own future, for our children’s future.

Antoine Walter: Which leads me to my closing question on impact. You’re doing this for the future. At what stage in the future can you look back and say, we had a positive impact? What’s your ambition there?

Alice Schmidt: For me, the first thing that comes to mind is democracy. It’s really if and when we will have more people really taking decisions and involved in these decisions, and when we see many more companies rather than a few large companies. I think that’s one of the aspects.

And we often hear from big companies that AI tools, sort of AI-supported tools, are democratizing the business world. When and if we achieve that, I think that would actually be a great indicator. I’m not saying this is our only goal, right? Because our goals to me are more in the natural and social justice world, et cetera.

But I think it’s going to be an indicator of whether we’ve managed this or not. I really believe in democracy, because we know, for example, that more democratic systems are more likely to take climate action. So that’s a good indicator.

Antoine Walter: You want to add something, Claudia, on that one, or?

Claudia Winkler: I would take it even a step back. I think it’s already a big achievement if more people talk about the topic, to bring it on the agenda, to make sure that it can be democratically discussed. The impact we want to make, with this book and in general as a society, is just to bring awareness of the topic to more people.

Antoine Walter: Well, thanks a lot for that discussion. As I said, I could keep on, but I have to fight my French tendencies and bring it to a close. So sorry for the long one.

Claudia Winkler: We had a great discussion. Thanks, thanks a lot. Very interesting aspects. I will follow up on at least a few of the topics later, so we will have interesting discussions for the rest of the…

Alice Schmidt: We’ll talk again for the puzzle, on each remaining piece of the puzzle.

Antoine Walter: Let’s round that off with a rapid fire question, if that’s fine with you.


Rapid fire questions:

And I’m going to open exactly with that one. I understand that Fast Forward is piece one of the puzzle. What’s the next one?

Claudia Winkler: You know, uh, I’m always trying to push Alice to write on circularity, because I believe a change to a circular business model would be the thing.

I’m not sure if I’m the one writing it, but Alice is definitely an expert on that one. If I had to choose a puzzle piece that’s very relevant and where there is not enough awareness, for me it’s definitely circularity.

Alice Schmidt: I mean, I love writing, but I think the next step for me with this would really be to make it much more widely known and to be, you know, speaking at lots of conferences about this.

It’s like with the Sustainability Puzzle, you know, when individual companies and conferences get, like, 500 copies, and that’s great, because that means a lot of people actually know the topic now and can engage.

Antoine Walter: Can you name one thing that you learned the hard way?

Claudia Winkler: I had one before…

Alice Schmidt: Uh, yeah, cause you’d written this down.

Claudia Winkler: I think learning things the hard way is helping us to progress. So whenever we fail, that’s a good occasion to stand up and learn more. But I don’t know a specific topic right now, so I have to pass on that one.

Antoine Walter: I’m sorry. So outside of AI, what would be the big trend to watch out for?

Claudia Winkler: There is one message we want to deliver.

It’s that, uh, the sustainability and the tech agendas have been disconnected. We see a better connection coming up now. So we see, luckily, a lot of discussions where sustainability and technology, not that much AI but technology in general, are discussed in a more connected context. And we believe that’s a good development.

And if we look at actions taken in companies, where a lot of companies put sustainability on their agenda due to regulation, we are a bit optimistic that this might also come to include the sustainable use of technology. And this would help us as a society to create a better future. So I think that’s a trend we see.

It’s small, but we see it growing, and this makes us optimistic. Hopefully it really grows, because there are a lot of barriers for this trend, but there are some glimpses that make us hopeful.

Alice Schmidt: So I have something on, uh, the thing I learned the hard way. It’s not what I wanted to say before, but actually I’ll say it.

One thing I learned the hard way happens to be in the book. It’s when I was asked by a student to adjust my teaching to her disability. She was hearing impaired, and I was extremely motivated to help her. And I totally failed. It wasn’t possible for me to speak slowly enough and direct my speech only at her, because it really would have sacrificed my teaching style.

Um, and basically I would have lost, you know, 29 other students, and it was such a disappointment. I really failed. She was very unhappy. It didn’t end well. Ten years later, with AI, this would be a totally different game. You know, with voice-to-text solutions, this student would be able to follow the course and we would all be happier.

So that’s a positive story, I think, on social justice and education.

Antoine Walter: If I instantly became your assistant, what would you delegate to me as the first task? And I don’t guarantee I’ll do it.

Claudia Winkler: So I have a few things. I totally love how you use all these tools to make your podcast, you know, efficient, good-looking, all the advertising.

You’ve been a great expert on that for the past year, whenever I follow you. So I would outsource everything on communication to you, because you’re a really great expert there. Just think of the nice pictures you do for your podcasts and all this stuff. I’m pretty sure you have great tools that support you there.

So you get all my trust on doing that.

Alice Schmidt: Yeah. And for me, it’s also clear. I mean, if you were our assistant, I’d ask you to just promote our book and our work everywhere. And essentially, in terms of impact, really also to make sure that sustainability experts like us, and also experts from most disciplines, have a much bigger, cushier seat at the table where important AI decisions are taken.

Yeah. So basically really collaborating, um, closely with big tech firms.

Antoine Walter: Just taking a side note here. Um, I’ve tried Midjourney, Stable Diffusion, and DALL-E for the cartoons. I have to say, I’m working with a very talented cartoonist, and she’s kilometers better than artificial intelligence.

I wasn’t looking to replace her at all, but I was curious about the capability. Maybe my prompts are awful, that’s also a possibility, but still, there’s still hope for the humans. That parenthesis closed. Would you have someone to recommend that I should definitely invite? Last time you recommended Mariana Mazzucato. I didn’t follow up on that because I wanted to first read the various books she wrote, and that’s the kind of book which requires a lot of my brain.

So it’s still on my table. I haven’t fully finished the second one; I read The Entrepreneurial State. Just for that, thanks for the recommendation. That’s a very wordy rapid-fire question, but if you have a recommendation?

Alice Schmidt: Leuchtler and Long, he’s an expert on AI’s environmental impact, but of course that’s very specific.

Claudia Winkler: But really, I think he could help. You know, we did this Optimist Cafe series again, and we interviewed a wide range of experts, and I think he really delivered the message very well: the environmental footprint of AI and what to take away, especially if you’re in the sustainability field.

This is a question you get asked a lot, and I think he’s very good at putting it into perspective, and very good at making the points. So I would definitely recommend you to speak to him. He’s very approachable.

Alice Schmidt: And one rapid-fire question you didn’t ask: what is it in my job that I’m doing today that I will not be doing in 10 years?

And I think that would be sensitizing companies to the need for climate or environmental action and expanding the business case, cause I think in 10 years’ time that’s going to be clearer than we want it to be. Sorry, I had to bring that in there.

Antoine Walter: You’re very right. I skipped it unintentionally, because that’s one of my favorite ones.

So I’m glad you’ve provided an answer to it. Alice, Claudia, it has been a pleasure to have you. I have to say, you mentioned voice and how we can fake voices nowadays. With the first interview I had with you, I had enough voice samples to train the AI I’m using for the editing, which means that if you’re skipping a word, I can just put one in; but actually I can write entire sentences and virtually interview you.

So I would not have had to have that conversation with you. But I’m really glad I had the two of you; it’s much better than whatever I could have come up with myself. So I’m really, really thankful for your time and for everything you shared today. If people want to follow up with you, I guess the Sustainability Puzzle website is still the place to go.

Is that right?

Alice Schmidt: That is right. And there’s now a new Fast Forward tab, which is the name of our book, and since it is part of the puzzle, we’ve integrated the two. We’re also on LinkedIn a lot, Claudia Winkler and Alice Schmidt, and also our co-authors Jeroen Dobbelaere and Florian Schütz.

Antoine Walter: So as always, those links are in the show notes.

If you’re watching or listening to that, have a look. There’s everything in there. And I hope to have you back for the next piece of the puzzle.

Alice Schmidt: We’d absolutely love that. And I have to congratulate you, Antoine. You’re asking really smart questions, and it’s really fun. Cause we do quite a few podcasts, and you were the first we came back to, to sort of basically ask to talk to again!

Antoine Walter: Thanks a lot. I appreciate the feedback!
