Featured

AI Visionary And Responsible AI Leader Linda Leopold Of H&M Group

Written by Publishing Team

My conversation with one of the most influential, inspirational, and brilliant minds of our time: Linda Leopold of H&M Group, whose untraditional path to the field of AI paves the way for the women leaders of the future.

In my ten-part series, The 9 Inspirational Women Leaders In AI Shaping The 21st Century, I was thrilled to have an in-depth and thought-provoking conversation with Linda Leopold of H&M Group. As Head of Responsible AI & Data, she leads the company’s work on sustainable and ethical artificial intelligence and data. After many years in the media industry, she joined the AI department at H&M Group in 2018. She is a former Editor-in-Chief of the critically acclaimed fashion magazine Bon and the author of two non-fiction books. She has been a columnist for Scandinavia’s most prominent financial newspaper and has worked as an innovation strategist at the intersection of fashion and technology.

With a history in writing and fashion, the move to the technology field seems unexpected. What brought you to technology and AI? What created this passion?

I don’t have a traditional background in tech or science. I studied journalism and worked for many years in the media industry, first as a news reporter and then as a magazine editor-in-chief, while also writing books. However, about seven years ago, I decided to make a significant career change and resigned from my job as editor-in-chief.

At that time, I knew I wanted to work in tech but, to be honest, I didn’t have an exact plan. I had recently written a book about human intelligence, and I was fascinated by the development of AI, in particular deep learning.

I have always been drawn to the topics that define our times. And I felt that the development in tech and AI was just too interesting and too important not to be part of. So I started studying deep learning. I hadn’t written a single line of code before. And I can clearly remember the first time I wrote a small script for a neural network that told the difference between images of cats and dogs, a straightforward computer vision algorithm. It was such a thrilling experience! I think it’s probably one of the most significant “aha” moments of my life. It was so powerful and, at the same time, so simple and beautiful. Then and there, I felt that this was probably something I would like to work with for the rest of my life.
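The kind of first script Leopold describes can be illustrated with a minimal sketch. This is hypothetical code, not her actual script: a tiny two-layer neural network in NumPy, trained on synthetic stand-ins for cat and dog images rather than real photos.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for tiny 8x8 grayscale images: "cats" are brighter
# in the top half, "dogs" in the bottom half (purely illustrative data).
def make_images(n, cls):
    imgs = rng.normal(0.0, 0.3, size=(n, 8, 8))
    if cls == 0:          # "cat"
        imgs[:, :4, :] += 1.0
    else:                 # "dog"
        imgs[:, 4:, :] += 1.0
    return imgs.reshape(n, -1)

X = np.vstack([make_images(100, 0), make_images(100, 1)])
y = np.array([0] * 100 + [1] * 100)

# One hidden layer, sigmoid output, trained with plain gradient descent.
W1 = rng.normal(0, 0.1, size=(64, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted P("dog")
    grad_logit = (p - y) / len(y)         # d(cross-entropy)/d(logit)
    W2 -= 0.5 * (h.T @ grad_logit[:, None])
    b2 -= 0.5 * grad_logit.sum()
    grad_h = grad_logit[:, None] * W2.T * (1 - h ** 2)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = (pred == y).mean()
```

A real cat/dog classifier would use a convolutional network and a labeled photo dataset, but the principle, learning weights from examples by gradient descent, is the same one this sketch demonstrates.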

What events led you to dedicate your life to sustainable and ethical artificial intelligence and data science?

It started with my passion for deep learning. And very soon, I realized that we always need to consider the dual nature of AI. It is a compelling technology with great potential to solve some of humanity’s biggest challenges. But, at the same time, we need to handle it with care to prevent causing any unintentional harm.

I often think of AI as this toddler with superpowers. Even though the basic principles of today’s AI were created in the 50s, it’s still relatively new as applied technology. It’s still very immature because we humans haven’t learned to foresee all the consequences of the implementation of AI.

Looking at the fashion industry specifically, I could see the potential of using AI to make the industry more sustainable and reach the vision of a circular fashion industry. That sparked my interest to ask, “what can we do with this amazing technology in the fashion industry?”

When I joined H&M Group in 2018, at the same time as the AI department was founded, I started looking into what responsible AI would mean for the company. From the very start, we’ve had these two goals:

1. We want to use AI to do good and to reach our sustainability goals.

2. We want to work with it carefully, making sure that we don’t cause any unintentional harm.

What is an example that genuinely showcases the critical importance of converging sustainability and artificial intelligence?

One clear example is demand prediction. We want to forecast what our customers will want to buy and what type of fashion they will love in the short and long-term future. Here, AI plays a significant role. The goal is only to produce what we can sell, and by analyzing a large amount of data from our operations, we can become much sharper in aligning supply and demand.

This allows us to ensure that we have the right type and quantity of clothes in the right store at the right time. We recently ran a pilot project called Body Scan Jeans in collaboration with one of our brands, Weekday. It is personalized, on-demand manufacturing of denim. The customer gets their body 3D scanned, and then the jeans are produced according to their measurements. That takes it even one step further and is an inspiring example of using AI for sustainability.

How do you define responsible AI?

There are many definitions of what responsibility means. For us, it’s always been a dual ambition to both use AI for good and prevent causing harm.

I would emphasize the multidisciplinary nature of AI. To talk about responsibility, you need to bring a lot of perspectives together. Responsible AI merges data science and machine learning with human rights, sustainability, ethics, and law.

Where are we in terms of the maturity of responsible AI? Where are we as an industry, and what do you think needs to be done to put more emphasis on responsibility?

Most industries now have realized the importance of actively working to implement responsible AI practices. That’s a start.

I also believe that most companies have created some high-level ethical principles. So now it’s much more about implementation as we strive towards fairness, transparency, etc. How do we make it work in practice? That is really where the hard part starts. How do we integrate responsibility into our processes, and how do we make it part of the company culture?

Could you share a bit of the H&M group Responsible AI framework and your contribution to the success of that framework?

That was one of the first things that I started working on with my team. We began by creating a very hands-on checklist based on nine principles.

Our AI should be:

  1. focused
  2. beneficial
  3. fair
  4. transparent
  5. governed
  6. collaborative
  7. reliable
  8. respecting privacy
  9. secure

My philosophy from the very beginning has been to work with these types of hands-on tools, like a checklist or technical methods for implementing responsible AI, and then, at the same time, work with culture and awareness-raising.

Culture is something that I would like to emphasize. In the discussions on responsible AI, there’s usually a lot of focus on technical tools and principles. But literacy and community are just as critical: striving towards creating a culture of responsible AI across the company, where ethics and responsible practices become top of mind for people.

Here my background as a journalist and writer has been beneficial as we’ve been using a lot of storytelling in this work. It’s a potent tool for creating understanding and engagement. For example, we have hosted debate sessions on ethical dilemmas related to AI, written like short science fiction stories. We call it the Ethical AI Debate Club. That’s a way of getting people to understand the topic of AI ethics and responsibility in a fun and engaging way.

Can you share a bit of your work with UNICEF, especially policy guidance on AI for children? What are some of the latest trends in developments there?

UNICEF selected the H&M group as one of the partners to pilot their policy guidance on AI for children. We worked on this during 2021, resulting in a case study published at the end of the year. This case study outlines and describes our approach to responsible AI, our work, and how we have updated our framework to make it more child-centered.

That’s something we constantly strive for. It’s never a set framework, and we always want to fine-tune it, enrich it and bring new perspectives. Here we tried to look specifically at children’s rights and see what we could do to improve our framework with the child rights lens. One insight was that we need to look at potential indirect implications for children. So even if we don’t have any AI products that target children specifically, it could be that children are interacting with a chatbot, for example, or using their parent’s device. We always need to consider that.

So we did some updates to our checklist and our framework to ensure that children’s specific needs are always addressed when we do the assessments of AI products.

What is your advice for the next generation regarding women getting into STEM (science, technology, engineering, and mathematics), and specifically AI? How do we get more women and girls to participate in this exciting field?

First of all, this is a significant point to make. This is an equality issue. Women have been historically underrepresented in STEM and still are today. That’s a problem for many reasons. Our lives are being shaped by technology and will be even more so in the future. Women must have a hand in shaping this future. So, what can we do about it?

One way is to highlight role models, as you are doing in this article series, and perhaps show that there are different ways into tech. I try to be out speaking as often as possible at events targeting women specifically, and young women entering the field. I think my own story can illustrate that there are many ways into technology. My path has not been a common or straightforward one. Thinking back on my career, I can’t help but wonder if I would have chosen a career in tech from the beginning if there had been more female role models.

With leaders and role models like Linda Leopold, women can find the inspiration and pathway to become leaders in AI and tech. A point Linda made that I want to reemphasize is that the future of the world will be shaped by technology. And that technology will be created by people. We must ensure that an equal number of those creating the technology that will create our future are women. That is the true path to ethical, responsible and unbiased AI.

About Linda Leopold, Head of Responsible AI & Data at H&M Group

Linda is Head of Responsible AI & Data at H&M Group. She leads the company’s work on sustainable and ethical artificial intelligence and data. She joined the AI department at H&M Group in 2018, after many years in the media industry and as a fashion magazine Editor-in-Chief. She is also the author of two non-fiction books.

