
Women in AI: Rachel Coldicutt researches how technology impacts society

To give AI-focused female academics and others their deserved – and overdue – time in the spotlight, TechCrunch is publishing a series of interviews focused on notable women who have contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Today’s profile: Rachel Coldicutt is the founder of Careful Industries, which researches the social impact technology has on society. Clients include Salesforce and the Royal Academy of Engineering. Before Careful Industries, Coldicutt was CEO of the think tank Doteveryone, which also researched how technology impacts society.

Before Doteveryone, she spent decades working in digital strategy for organizations such as the BBC and the Royal Opera House. She attended the University of Cambridge and received an OBE (Order of the British Empire) honor for her work in digital technology.

Briefly, how did you get your start in AI? What attracted you to this field?

I started working in the tech sector in the mid-90s. My first proper tech job was working on Microsoft Encarta in 1997, and before that, I helped build content databases for reference books and dictionaries. Over the past three decades, I’ve worked with all kinds of new and emerging technologies, so it’s hard to pinpoint the exact moment I “got into AI,” because I’ve been using automated processes and data to make decisions, create experiences and produce artworks since the 2000s. Instead, I think the question is probably, “When did AI become the set of technologies everyone wanted to talk about?” And I think the answer is probably around 2014, when DeepMind was acquired by Google – that was the moment in the UK when AI overtook everything else, even though a lot of the underlying technologies we now call “AI” were things that were already in fairly general use.

I came to work in the tech sector almost by accident in the 1990s, and what has kept me in the field through many changes is the fact that it is full of fascinating paradoxes: I love learning new skills, I’m fascinated by how empowering it can be to make things and by what we can discover from structured data, and I would happily spend the rest of my life observing and trying to understand how people create and shape the technologies we use.

What work in the AI field are you most proud of?

A lot of my AI work has been in policymaking and social impact assessments, working with government departments, charities and businesses of all kinds to help them use AI and related technology in intentional and trustworthy ways.

In the 2010s I ran Doteveryone – a responsible tech think tank – which helped change the way UK policymakers think about emerging technology. Our work made it clear that AI is not a consequence-free set of technologies, but rather something that has real-world implications for people and societies. In particular, I’m really proud of the free Consequence Scanning tool we developed, which is now used by teams and businesses around the world to help them anticipate the social, environmental and political impacts of the choices they make when shipping new products and features.

Most recently, the 2023 AI and Society Forum was another proud moment. In the run-up to the UK government’s industry-dominated AI Safety Summit, my team at Careful Trouble quickly convened a gathering of 150 people from across civil society to collectively make the case that it is possible to make AI work for 8 billion people, not just 8 billionaires.

How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

As a relative old-timer in the tech world, I feel like some of the gains we made in gender representation in tech have been lost over the last five years. Research from the Turing Institute shows that less than 1% of the investment made in the AI sector has gone to women-led startups, while women still make up only a quarter of the overall tech workforce. When I go to AI conferences and events, the gender mix – particularly in terms of who gets a platform to share their work – reminds me of the early 2000s, which I find really sad and shocking.

I’ve been able to navigate the sexist attitudes of the tech industry because I’ve had the great privilege of being able to set up and run my own organization. I spent much of my early career experiencing gender discrimination and sexual harassment on a daily basis – dealing with that gets in the way of doing great work and is an unnecessary cost of entry for many women. Instead, I’ve prioritized building a feminist business where, collectively, we strive for equality in everything we do, and my hope is that we can show that other ways are possible.

What advice would you give to women wanting to enter the AI field?

Don’t feel like you have to work on “women’s issues,” don’t get distracted by the hype, and seek out peers and build friendships with other people so you have an active support network. What has kept me going all these years is my network of friends, former colleagues and collaborators – we provide each other with mutual support, a never-ending supply of spirited conversation and the occasional shoulder to cry on. Without that, it can feel very lonely; you’re often the only woman in the room, so it’s important to have a safe place to decompress.

And as soon as you’re in a position to hire, hire well: don’t replicate the structures you’ve seen or the expectations and norms of an elitist, sexist industry. Challenge the status quo and support your new hires every time you recruit. That way, you can start building a new normal wherever you are.

And seek out the work of some of the great women leading AI research and practice: start with pioneers like Abeba Birhane, Timnit Gebru and Joy Buolamwini, who have all produced foundational research that has shaped our understanding of how AI changes and interacts with society.

What are some of the most pressing issues facing AI as it develops?

AI is an intensifier. It may feel like some uses are inevitable, but as a society, we need to be empowered to make clear choices about what is worth intensifying. At the moment, the main thing the increasing use of AI is doing is increasing the power and bank balances of a relatively small number of male CEOs, and it doesn’t seem like [it] is shaping a world that many people want to live in. I would love to see more people, particularly in industry and policymaking, engage with the question of what more democratic and accountable AI looks like and whether it is even possible.

The climate impacts of AI – the use of water, energy and critical minerals – and the health and social-justice impacts on people and communities affected by the exploitation of natural resources must be at the top of the list for responsible development. The fact that LLMs, in particular, are so energy intensive means the current model is not fit for purpose; in 2024, we need innovation that protects and restores the natural world, and extractive models and ways of working need to be retired.

We also need to be realistic about the surveillance impacts of a more data-driven society and the fact that – in an increasingly volatile world – any general-purpose technologies will likely be used to wreak unimaginable horrors in war. Everyone working in AI must be realistic about the historic, long-standing relationship of technological R&D with military development; we need to champion, support and demand innovation that starts in communities and is governed by communities, so that we get outcomes that strengthen societies rather than increase destruction.

What issues should AI users be aware of?

It’s really important to think about the day-to-day impacts of the increasing use of AI and what it means for everyday human interactions, alongside the environmental and economic extraction involved in current AI business and technology models.

While some of the issues that have made headlines relate to more existential risks, it’s worth looking at how the technologies you use help and hinder you on a daily basis: which automations you can turn off and on, which ones deliver real benefit, and where, as a consumer, you can vote with your feet to show that you really want to talk to a real person, not a bot. We don’t need to settle for poor-quality automation, and we must unite for better outcomes!

What’s the best way to create AI responsibly?

Responsible AI starts with good strategic choices – instead of just throwing an algorithm at a problem and hoping for the best, it’s possible to be intentional about what to automate and how. I’ve been talking about the idea of “just enough internet” for the past few years, and it feels like a really useful idea to guide how we think about building any new technology. Instead of pushing the boundaries all the time, can we build AI in a way that maximizes benefits and minimizes harm to people and the planet?

At Careful Trouble, we have developed a robust process that we use with boards and senior teams. It starts with mapping out how AI can and can’t support your vision and values; understanding where problems are too complex and variable to be improved by automation, and where automation would bring benefit; and, finally, developing a proactive risk-management framework. Responsible development is not a one-off application of a set of principles, but an ongoing process of monitoring and mitigation. Continuous deployment and social adaptation mean that quality assurance can’t be something that ends once a product ships; as AI developers, we need to build capacity for iterative, social sensing and treat responsible development and deployment as a living process.

How can investors better push for responsible AI?

By investing more patiently, supporting more diverse founders and teams, and not looking for exponential returns.
