OpenAI announces a team to incorporate crowdsourced governance ideas into its models

OpenAI is taking steps to incorporate public input on how to ensure its future artificial intelligence (AI) models "align with humanity's values." The AI startup is creating a new team called Collective Alignment, consisting of researchers and engineers, with the goal of developing a system to collect and "codify" public input on the behavior of its models in OpenAI's products and services.

"While we continue to work with external consultants and research teams, including starting pilots to incorporate...prototypes into guiding our models, we're also looking for...research engineers with diverse technical backgrounds to help us develop this work," OpenAI wrote in a blog post.

The Collective Alignment team is an evolution of OpenAI's public input program, launched last May, which awarded grants to fund experiments aimed at creating a "democratic process" to decide what rules AI systems should follow. The program's goal, as stated by OpenAI at its inception, was to fund individuals, teams, and organizations to develop proofs of concept that could address questions about guidelines and governance for AI.

In today's blog post, OpenAI summarized the work of grant recipients, which ranged from video chat interfaces to crowdsourced model audits and "approaches to mapping beliefs into dimensions that can be used to refine model behavior." All the code used in the work of grant recipients was made public this morning, along with brief summaries of each proposal and high-level considerations.

OpenAI has tried to present the program as separate from its commercial interests. However, this is somewhat difficult to accept, considering OpenAI CEO Sam Altman's criticisms of regulation in the EU and elsewhere. Altman, along with OpenAI President Greg Brockman and Chief Scientist Ilya Sutskever, has repeatedly argued that the pace of innovation in AI is so fast that existing authorities cannot be expected to adequately control the technology, hence the need to involve the crowd in the work.

Some of OpenAI's rivals, including Meta, have accused OpenAI (among others) of seeking to achieve a "regulatory capture of the AI industry" by pushing against open AI research and development. OpenAI denies this, as was to be expected, and would likely refer to the grant program (and the Collective Alignment team) as an example of its "openness."

In any case, OpenAI is coming under increasing scrutiny from regulators, including an investigation in the UK into its relationship with partner and investor Microsoft. The startup has also recently sought to reduce regulatory risk in the EU around data privacy, using an Irish-based subsidiary to limit the ability of certain EU privacy watchdogs to act unilaterally on their concerns.

Yesterday – in part to reassure regulators, no doubt – OpenAI announced that it is working with organizations to try to limit the ways in which its technology could be used to influence or sway elections through malicious means. The startup's efforts include making it clearer when images are generated by AI using its tools, and developing approaches to identify AI-generated content even after images have been modified.