AI policy is now prioritizing energy, not deepfakes. OpenAI's spending makes clear how much it wants to shape the new rules.
Artificial intelligence (AI) is the most discussed technology of recent years. Advocates promise that it will help overcome productivity challenges and radically transform the economy through wage gains and higher economic output, among other benefits.
Productivity is a key ingredient in future economic growth and standard of living, as it offers the potential to increase output without increasing inputs such as worker hours, natural resources, and investment costs. Yet in past waves of innovation, we have seen technologies achieve widespread adoption without any evidence that they increased productivity. Will this time be different?
In this study, we tackle the critical question of whether AI adoption leads to productivity improvement at the firm level. Evidence of productivity gains from AI use is mixed. There is no conclusive evidence of a strong positive or negative relationship between AI adoption and short-term productivity improvement.
Firms that adopted AI were already more productive than their peers, but the decision to adopt AI did not increase the rate at which their productivity grew.
Use of AI on images of the dead is unregulated in the country, leaving cybersecurity experts worried about the potential for deepfakes and identity theft.
J. García López, a funeral home in Mexico that launched its Día de Muertos campaign in October, received over 15,000 requests to create AI-generated videos of deceased persons. Daniela Rojas, senior program officer at Eon Institute, an AI-focused Mexican think tank, expressed concerns about how such companies store people’s images and biometrics.
Using AI to resurrect the dead has raised ethical questions elsewhere. In 2020, Jang Ji-sung, a mother of four in South Korea, was virtually reunited with an AI-generated avatar of her dead 7-year-old daughter. Jang said this helped her say farewell to her child, but “many psychologists have come up and said this might, in some cases, make the grieving process longer”. The discussion has yet to take hold in Mexico, where the practice of digital resurrections exploded in popularity this year.
In the late summer, Google surveyed 1,005 full-time knowledge workers aged 22-39 who are either in leadership roles or aspire to one. 93% of Gen Z respondents (aged 22-27) and 79% of millennials (aged 28-39) said they were using two or more AI tools a week, such as ChatGPT, DALL-E, Otter.ai, and other generative AI products.
This paper examines ‘open’ AI. Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter.
To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
This is the first time the carbon emissions caused by using an AI model for different tasks have been calculated.
Generating an image using a powerful AI model takes as much energy as fully charging your smartphone, according to a new study by researchers at the AI startup Hugging Face and Carnegie Mellon University. Using an AI model to generate text, however, is significantly less energy-intensive: 1,000 text generations use only about as much energy as 16% of a full smartphone charge.
Their work, which is yet to be peer-reviewed, shows that while training massive AI models is incredibly energy-intensive, training is only one part of the puzzle: most of a model's carbon footprint comes from its actual use.
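As a rough sanity check, the gap between the two tasks can be worked out directly from the figures quoted above. A minimal sketch in Python, assuming (purely for illustration) that a full smartphone charge is about 0.012 kWh; the assumed value cancels out of the final ratio:

```python
# Back-of-envelope comparison of the per-task energy figures quoted above.
# ASSUMPTION (for illustration only): a full smartphone charge ~ 0.012 kWh.
SMARTPHONE_CHARGE_KWH = 0.012

energy_per_image_kwh = 1.0 * SMARTPHONE_CHARGE_KWH        # 1 image ~ 1 full charge
energy_per_1000_texts_kwh = 0.16 * SMARTPHONE_CHARGE_KWH  # 1,000 generations ~ 16% of a charge
energy_per_text_kwh = energy_per_1000_texts_kwh / 1000

ratio = energy_per_image_kwh / energy_per_text_kwh
print(f"one image ~ {ratio:,.0f} text generations")  # ~ 6,250
```

On these numbers, a single image generation costs roughly as much energy as several thousand text generations, which is why the study treats the two tasks so differently.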
The AI tools provided by companies like Palantir and Clearview raise questions about when and how invasive tech should be used in wartime.
“Ukraine is a living laboratory in which some of these AI-enabled systems can reach maturity through live experiments and constant, quick reiteration,” says Jorrit Kaminga, the director of global policy at RAIN, a research firm that specializes in defense AI. Yet much of the new power will reside in the hands of private companies, not governments accountable to their people.
“This is the first time ever, in a war, that most of the critical technologies are not coming from federally funded research labs but commercial technologies off the shelf,” says Steve Blank, a tech veteran and co-founder of the Gordian Knot Center for National Security Innovation at Stanford University. “And there’s a marketplace for this stuff. So the genie’s out of the bottle.”
Four in five (79%) online teenagers aged 13-17 now use generative AI tools and services, with a significant minority of younger children aged 7-12 also adopting the technology (40%).
Adult internet users aged 16 and above are, on average, comparatively more reluctant users of generative AI (31%). Among those who have never used this technology (69%), nearly one in four have no idea what it is (24%).
Snapchat My AI - which became freely available to all Snap users in April 2023 - is the most popular generative AI tool among children and teens, used by half (51%) of online 7-17-year-olds. Online teenage girls are its most avid users (75%).
ChatGPT is the most widely used generative AI service among internet users aged 16 and above (23%). Among online youngsters aged 7-17, boys are keener users of ChatGPT than girls (34% versus 14%).
Discover how actionable AI empowers systems to understand human inputs and take proactive actions based on context and learned behaviors.
Think of LAMs as intelligent assistants who not only understand your requests but also take initiative to fulfill them. This unique ability to combine language understanding with autonomous action holds immense potential for transforming various aspects of our lives.
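To make that pattern concrete, here is a minimal, hypothetical sketch in Python: a request is "understood" and then mapped to an action the system executes on the user's behalf. The function names and the keyword matching are invented for illustration; a real large action model would use a learned model for the understanding step rather than a lookup table.

```python
from typing import Callable

# Hypothetical actions an assistant could take on the user's behalf.
def book_table(request: str) -> str:
    return f"Booked a table (request: {request!r})"

def send_email(request: str) -> str:
    return f"Drafted and sent an email (request: {request!r})"

ACTIONS: dict[str, Callable[[str], str]] = {
    "reservation": book_table,
    "email": send_email,
}

def act_on(request: str) -> str:
    # Step 1: "understand" the request (a keyword lookup stands in for
    # the language-understanding component of a real action model).
    for keyword, action in ACTIONS.items():
        if keyword in request.lower():
            # Step 2: take the action, rather than just describing it.
            return action(request)
    return "No matching action; replying with text only."

print(act_on("Make a reservation for two at 7pm"))
print(act_on("Email the team my notes"))
```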
Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”
“It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said.
Rumman Chowdhury, Timnit Gebru, Safiya Noble, Seeta Peña Gangadharan, and Joy Buolamwini open up about their artificial intelligence fears. Today the risks of artificial intelligence are clear — but the warning signs have been there all along.
Artificial intelligence systems continue to be fed racist and sexist training material and then distributed around the world.
AI systems are often trained on gargantuan data sets, usually scraped from the web for cost-effectiveness and ease. But this means AI can inherit all the biases of the humans who design them, and any present in the data that feeds them. The end result mirrors society, with all the ugliness baked in.
The deal represents the first corporate agreement for multiple deployments of a single advanced reactor design in the United States.
Alameda, CA – October 14, 2024 – Kairos Power and Google have signed a Master Plant Development Agreement, creating a path to deploy a U.S. fleet of advanced nuclear power projects totaling 500 MW by 2035. Under the agreement, Kairos Power will develop, construct, and operate a series of advanced reactor plants and sell energy, ancillary services, and environmental attributes to Google under Power Purchase Agreements (PPAs). Plants will be sited in relevant service territories to supply clean electricity to Google data centers, with the first deployment by 2030 to support Google’s 24/7 carbon-free energy and net zero goals.
I’m beginning to suspect that one of the most common misconceptions about LLMs such as ChatGPT involves how “training” works. A common complaint I see about these tools is that people don’t want to even try them out because they don’t want to contribute to their training data. This is by no means an irrational position to take, but it does often correspond to an incorrect mental model about how these tools work.
Short version: ChatGPT and other similar tools do not directly learn from and memorize everything that you say to them.
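A toy sketch of why that is, in Python and under loudly stated assumptions (this is a conceptual illustration, not any vendor's actual architecture): at inference time the weights are a read-only snapshot, and your words live only in a transient context window. Any learning from user data would happen in a separate, offline training run on data the operator curates.

```python
class FrozenModel:
    """Toy stand-in for a deployed LLM: weights are fixed after training."""
    def __init__(self, weights: dict[str, float]):
        self.weights = dict(weights)  # snapshot produced by a past training run

    def reply(self, context: list[str]) -> str:
        # Inference reads the weights and the transient context; it writes neither.
        style = self.weights.get("politeness", 0.0)
        return f"(reply conditioned on {len(context)} messages, politeness={style})"

def offline_training_run(weights: dict[str, float], corpus: list[str]) -> dict[str, float]:
    # Learning happens here, in a separate batch process on data the operator
    # chooses to include, not live inside your chat session.
    updated = dict(weights)
    updated["politeness"] = weights.get("politeness", 0.0) + 0.1 * len(corpus)
    return updated  # a new model version, deployed later (if at all)

model = FrozenModel({"politeness": 0.5})
chat = ["Hello!", "Please remember that my birthday is in June."]
print(model.reply(chat))  # the model answers...
print(model.weights)      # ...but its weights are unchanged by the conversation
```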
The popularisation of artificial intelligence (AI) has given rise to imaginaries that invite alienation and mystification. At a time when these technologies seem to be consolidating, it is pertinent to map their connections with human activities and more-than-human territories. What set of extractions, agencies, and resources allows us to converse online with a text-generating tool or to obtain images in a matter of seconds?
There are many use cases for generative AI, spanning a vast number of areas of domestic and work life. Looking through thousands of comments on sites such as Reddit and Quora, the author’s team found that the use of this technology is as wide-ranging as the problems we encounter in our lives. The 100 categories they identified can be divided into six top-level themes, which give an immediate sense of what generative AI is being used for: Technical Assistance & Troubleshooting (23%), Content Creation & Editing (22%), Personal & Professional Support (17%), Learning & Education (15%), Creativity & Recreation (13%), Research, Analysis & Decision Making (10%).
Artificial intelligence had its breakout year in 2023, with large language models (LLMs) and text-to-image generators capturing the attention and imagination of technologists and investors alike.
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves.
Why call the errors “hallucinations” at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.
Artificial intelligence (AI) has an environmental cost. Beginning with the extraction of raw materials and the manufacturing of AI infrastructure, and culminating in real-time interactions with users, every aspect of the AI lifecycle consumes natural resources – energy, water, and minerals – and releases greenhouse gases. The amount of energy needed to power AI now outpaces what renewable energy sources can provide, and the rapidly increasing usage of AI portends significant environmental consequences. The goal of this primer is to shed light on the environmental impacts of the full AI lifecycle, describing which kinds of impacts are at play when, and why they matter.