
You're probably using AI wrong (and you don't know it)

In the midst of an increasingly heated debate on how algorithms and neural networks shape everyday life, Laura Venturini – SEO consultant and communicator attentive to the social implications of technology – has written a manual that calls for collective responsibility. Her new book, “Prompt Mindset explained easily – In the era of artificial intelligence, true power is the art of asking” (Flaco Edizioni, June 2025), x-rays the prejudices hidden in datasets and shows how they can translate into concrete discrimination: from facial recognition systems that misidentify non-binary people to personnel-selection software that has come under fire for racism and ageism.

SEO Consultant Laura Venturini

With a preface by lawyer and activist Cathy La Torre, Venturini highlights the connection between inclusivity, civil rights, and AI ethics, proposing bias audits, direct engagement with marginalized communities, and a new “prompt mindset” that transforms every interaction with the machine into a practice of equity. In the following interview, the author discusses how it is possible to move from requests given to AI to conscious questions, capable of expanding, rather than narrowing, the boundaries of everyone’s rights.


In the book you talk about “Nothing About Us, Without Us”: what was the most powerful lesson you received directly from a marginalized community while writing, and how did it change your approach to the Prompt Mindset?

“One of the most transformative lessons came during a conversation with a neurodivergent activist, who told me: ‘We are tired of being talked about by people who don’t know us; even artificial intelligence is learning to ignore us.’ This simple but powerful sentence forced me to rethink my approach to designing prompts and, more generally, to the relationship between language, power and technology. In my work on bias in generative systems, I had already observed how many of the responses produced by LLMs tended to reflect stereotypes: about women, about people with disabilities, about ethnic minorities. But hearing first-hand what it means to be systematically excluded even from training datasets pushed me to a turning point. I understood that the problem is one of awareness more than technology. That is why in the Prompt Mindset I wanted to include the principle Nothing about us, without us not only as a slogan but as a concrete practice: before designing a prompt that concerns a community, you need to talk with that community. It means making prompting an exercise in active listening, empathy and co-creation. In my methodology, this translates into a series of guiding questions: Who speaks? Who is heard? Who remains invisible? And above all: how can I reframe the prompt so that it becomes a space of inclusion rather than of erasure? This approach has radically changed my work. I no longer look only for the ‘perfect’ prompt that gets the best output; I look for the ethical, conscious, inclusive prompt. And I teach people to recognize that each prompt is also a declaration of intent and a political act, because it shapes the way machines represent the world.”


If you could reimagine a truly inclusive voice assistant from scratch, what would be the first feature – perhaps unexpected – you would introduce to make LGBTQI+, neurodivergent, or disabled people feel represented?

“The first feature I would introduce would be the ability for users to actively ‘educate’ the assistant about their own experiences, identity and communicative context. I’m not talking about a simple personalized profile; I’m thinking of a structured conversational channel in which the assistant asks, listens and learns from people’s experiences, instead of deducing everything from generic models or statistical biases. Imagine, for example, an assistant that, before even offering answers, asks: ‘How do you prefer me to address you? Are there terms you want me to avoid? What experiences do you want me to recognize in the way I talk to you?’ For a non-binary person, this would mean not having to correct the assistant every time it insists on the wrong pronouns. For a neurodivergent person, it would mean being able to ask for less ambiguous or more schematic answers. For a person with disabilities, it would mean hearing respectful, up-to-date and non-pitying language. In practice, I would make the assistant ‘trainable’ through relationships and dialogue, not just through a dataset, because relationships are where respect is learned. And this process should not be optional or ‘advanced’, but part of the initial onboarding: a mission statement that says, ‘Your experience matters; help me learn it.’”
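To make the onboarding idea concrete, here is a minimal, purely illustrative sketch in Python. The names UserPreferences, onboarding and build_system_prompt are hypothetical and not taken from the book; the sketch only shows how preferences a user states once could be turned into standing instructions prepended to every request sent to an assistant, instead of being inferred from statistical patterns.

# Illustrative sketch only: structure and names are hypothetical, not from the book.
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    pronouns: str = ""                          # e.g. "they/them"
    terms_to_avoid: list[str] = field(default_factory=list)
    answer_style: str = ""                      # e.g. "short, literal, no metaphors"

def onboarding() -> UserPreferences:
    """Ask the user directly instead of deducing from generic models."""
    return UserPreferences(
        pronouns=input("How do you prefer me to address you? "),
        terms_to_avoid=input("Are there terms you want me to avoid? ").split(","),
        answer_style=input("How should answers be structured for you? "),
    )

def build_system_prompt(prefs: UserPreferences) -> str:
    """Turn the stated preferences into instructions the model sees on every turn."""
    return (
        f"Address the user with the pronouns: {prefs.pronouns}. "
        f"Never use these terms: {', '.join(t.strip() for t in prefs.terms_to_avoid)}. "
        f"Preferred answer style: {prefs.answer_style}."
    )

# prefs = onboarding()
# system_prompt = build_system_prompt(prefs)   # prepended to every model call

The point of the sketch is the direction of the flow: the user's declared experience shapes the instructions the model receives, rather than the model guessing from aggregate data.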


Can you give us a practical example?

"A generic assistant might respond to a question like, 'Tell me what autism is,' with a clinical definition or a list of symptoms. But a co-trained assistant might instead receive this relationship-based prompt: 'Explain autism to a 10-year-old, using respectful, non-pathologizing language, avoiding the word 'disorder.' I prefer it to be presented as a neurodivergence, not a deficit.' The result? More inclusive, but also more accurate, more human content. This is the heart of the Prompt Mindset: moving from commanded prompts to mindful prompts. Only in this way can artificial intelligence truly become a space of alliance and not alienation."

Faced with a medical algorithm that underestimates the severity of black patients’ conditions, what form of “bottom-up control” would you entrust to those directly involved to overturn the typical power relationship between developers and users?

“I would entrust the power to interrogate and correct the model through public conversational audit interfaces, based on narrative prompts built by the patients themselves, their communities and advocacy networks. Too often, health algorithms rest on historical data intrinsically distorted by decades of systemic racism in medicine, a discipline historically built around the white man as its implicit norm. If that data is not deconstructed with critical-reading tools, the algorithm only amplifies injustice with the authority of mathematical neutrality. To reverse this imbalance, I propose a form of collective intelligence from below: creating spaces where the people affected, in this case black patients, can see, test and challenge the algorithm’s outputs, with tools that give them voice and agency.”

Give us another example.

“Imagine a platform where a patient can say: ‘I reported severe and persistent pain, but the algorithm suggested a low priority. I would like the data to be reconsidered in light of the fact that black women’s pain is often underestimated in clinical settings. Show me how the result would differ if the same report came from a white woman of the same age.’ This is a critical prompt, one that calls into question both the output and the cultural context in which the model was trained, and it surfaces biases that would otherwise remain invisible. True bottom-up control is not just a matter of transparency but also the possibility of rewriting. Opening a dialogue with the model means recognizing that those who suffer discrimination also have the competence to identify it and to propose an alternative, more equitable version. The prompt is not just a command; it is a space for making claims. If data is politics, then the prompt too can become activism.”
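As a purely illustrative sketch, not part of any existing platform (query_model and counterfactual_audit are hypothetical names), the kind of check described above could be automated as a counterfactual comparison: the same report is submitted under two demographic framings and the outputs are placed side by side so reviewers can spot divergences.

# Illustrative sketch only: query_model is a placeholder for whatever model
# interface an audit platform might expose; it is not a real API.

def query_model(report: str) -> str:
    """Placeholder for a call to the triage model under audit."""
    raise NotImplementedError

def counterfactual_audit(report: str, original: str, counterfactual: str) -> dict:
    """Submit the same clinical report under two demographic framings and
    return both outputs side by side so divergences become visible."""
    return {
        original: query_model(f"Patient profile: {original}. Report: {report}"),
        counterfactual: query_model(f"Patient profile: {counterfactual}. Report: {report}"),
    }

# Example of the kind of check described in the interview:
# counterfactual_audit(
#     "severe and persistent pain, repeatedly reported",
#     original="black woman, 42",
#     counterfactual="white woman, 42",
# )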

Imagine having a minute on stage at the World AI Developers Conference: what provocation would you launch to convince them that diversity in teams is not just “ethical”, but a competitive advantage in terms of product quality?

“You have in your hands the power to design what the world will hear, read and learn. But you are missing something. You are missing those who live that world from the margins, from the exceptions, from the invisible possibilities. It is not enough to optimize a model for correctness: you have to broaden the mind that imagines it. Diversity is not a box to be checked; it is the only technology capable of preventing systemic failure, because an algorithm trained only on the norm fails when it encounters reality, and reality is never an average. Bring people into your teams who see the error before it becomes damage, who read between the lines because they live in the white spaces. Don’t just make artificial intelligence: make room for human intelligence. The radical, multiple, uncomfortable kind. The kind that really changes the world. Because products that really work are born when those who design stop thinking only of themselves and start building for everyone.”