Financial Regulators’ Open-Source Crackdown Sets Bad Precedent for AI, DeFi, and Innovation

Jack Solowey

Washington policymakers are consumed with concern about AI. Fears run the gamut from existential threats to humanity to chatbots fibbing. In recent weeks, AI entrepreneurs and policy thinkers have helped to frame one of AI’s principal risks as the possible threat posed to political stability and continuity. In a thoughtful multipart series on “AI and Leviathan,” for example, Samuel Hammond (senior economist at the Foundation for American Innovation) argues that “[d]emocratized AI is a much greater regime change threat than the internet” and “[t]he moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion.”

It’s wise to expect that the prospect of dizzying changes threatening the established order will incline states toward aggressive counterreactions. Indeed, we already see early signs of this in financial regulators’ response to autonomous and self‐​executing financial tools (e.g., smart contracts on cryptocurrency blockchains). Notably, smart contracts and certain AI models share a common feature that, when paired with the ability to operate with limited human intervention, can be particularly disruptive to existing regulatory methods: open‐​source code that is freely reproducible.

Even if open‐​source AI models constitute the minority of key foundation models, the fact that enough relatively advanced AI models are readily copyable (not to mention portable and storable) poses a clear challenge to governments looking to exert control over AI. Consequently, there’s an emerging policy battle over the desirability of open‐​source AI.

Unfortunately, financial regulators have led the way in cracking down on novel, open-source technologies. In doing so, they risk creating dangerous precedents for the use of open-source software, AI-based and otherwise, both in financial applications and in tech innovation more broadly. Before continuing further down this fraught path, policymakers must carefully consider the potential benefits of open-source software development that would be lost to knee-jerk policy reactions.

Fundamentally, open-source software is an intellectual property question: whether a program’s authors license others to freely use, copy, modify, and distribute the software without seeking the authors’ permission (with the authors typically disclaiming liability in the process). Open-source licenses facilitate the creative remixing of software, as well as ecosystems that foster iterative improvements.
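
To make those mechanics concrete, here is a minimal, purely illustrative sketch (not from the article) of how such permissions typically appear in practice: a single SPDX license identifier at the top of a source file declares MIT-style terms, which grant the rights to use, copy, modify, and distribute the code while disclaiming warranty and liability. The module and function below are hypothetical.

```python
# SPDX-License-Identifier: MIT
#
# Hypothetical module from an open-source project. The MIT terms declared
# above let anyone use, copy, modify, merge, publish, and distribute this
# code without asking the authors, who in turn disclaim all warranties and
# liability (the software is provided "as is").

def compound_value(principal: float, rate: float, periods: int) -> float:
    """Return the value of `principal` after compounding at `rate` for `periods`."""
    return principal * (1 + rate) ** periods

if __name__ == "__main__":
    # Anyone may copy this file, change the formula, and redistribute the result.
    print(compound_value(1_000.0, 0.05, 10))
```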

Importantly, open‐​source licenses also give code something of a life unto itself, as it can continue—through the work of developer communities—to evolve and multiply beyond the reach of the original authors.

Open-source software can therefore pose a challenge to government agencies accustomed to regulating products and services by regulating their providers. If the government has a problem with OpenAI’s software, it hauls in OpenAI. But if it has a problem with any of the tens of thousands of open-source AI models, who (or what) gets named and blamed?

Open-source AI critics fear that averting and remediating any harms associated with AI models will be seriously hampered by a lack of namable and blamable developers with end-to-end control over the code or, in the financial services context, of human professionals holding out a shingle and visibly shouldering a fiduciary duty. Yet others take precisely the opposite position, arguing that the ability to freely use, modify, and distribute AI models will be essential to tackling AI “safety” and fallibility problems. One way this could play out is that open-source licenses simply allow more minds to work on these challenges and, in turn, to make the fruits of their research freely available.

Notwithstanding these hard and high‐​stakes questions, the financial regulatory leviathan already has charged headlong into criminalizing the use of certain open‐​source software when the existence of a sanctionable provider is, at the very least, contestable. The Treasury Department’s Office of Foreign Assets Control (OFAC) has been breaking new ground in sanctioning—i.e., prohibiting transactions with—open-source software itself.

Specifically, in August 2022, OFAC added the Ethereum blockchain addresses of Tornado Cash, a tool for enhancing cryptocurrency transaction privacy, to the sanctioned persons list in connection with the tool’s alleged use by North Korean state-sponsored hackers to launder funds.

Tornado Cash users sued the Treasury Department to vacate the sanctions designation. They contended, among other things, that the Tornado Cash developers and token holders were not properly considered a sanctionable “entity” and that the decentralized, open‐​source, and immutable Tornado Cash software was not properly considered sanctionable “property” under relevant law. On August 17, 2023, the court found in favor of the Treasury Department on these issues.

Regardless of whether one thinks the court got it right in the case before it (plaintiffs faced challenging deference standards on interpretive questions), the Tornado Cash episode shows an emerging government suspicion of open-source software, with financial regulators at the forefront.

For regulators to continue down this path would risk creating further dangerous precedent. Indeed, the Tornado Cash plaintiffs noted the chilling effect the sanctions designation had on software development. Policymakers should be wary of this chilling effect and the potential lost benefits when it comes to open‐​source financial technology, as well as open‐​source software more broadly.

In the financial context, increasing the risks of publishing open‐​source tools undermines privacy‐​enhancing technologies and the broader use of autonomous financial services that mitigate traditional intermediary risks. In addition, where the suppression extends to open‐​source AI, the potential foregone benefits include the ability of both financial institutions and individuals to run open‐​source AI models on their own hardware to improve processing speed, maintain the confidentiality of personal data, and achieve greater interoperability and customizability.
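
As a rough illustration of that last point, the following sketch (not from the article) shows what running an open-source language model entirely on local hardware can look like, assuming the Hugging Face transformers library is installed; the model name and prompt are illustrative stand-ins, and no data leaves the machine.

```python
# Minimal local-inference sketch: an openly licensed model runs on the user's
# own hardware, so the prompt (which may contain sensitive financial details)
# never leaves the machine.
from transformers import pipeline

# "gpt2" stands in for any openly licensed model stored locally;
# larger open models can be substituted if the hardware allows.
generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize this budget: rent 35%, groceries 25%, savings 20%, other 20%."
result = generator(prompt, max_new_tokens=50)

print(result[0]["generated_text"])
```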

Notably, the use of more bespoke open-source AI models in finance could help to address regulators’ fears of herding behavior due to “mono-models,” i.e., widespread reliance on a single dominant model. Moreover, leveraging experimental tools for autonomous task performance (e.g., an AI agent that could help to organize one’s financial life) is, thus far, largely a matter of using open-source projects. None of this is to say that open-source software is always the right tool for the job, and there may ultimately be market forces that make open-source models less competitive. But that’s no reason for regulators to put their thumbs on the scale.

As for cutting-edge software more broadly, policymakers should consider the role that open-source software development may play in discovering and disseminating standards for better aligning AI models (i.e., for averting existential risks like civilizational collapse, or worse). Policymakers must steelman the arguments for an open-source approach to alignment, including the example of the high security standards achieved by community vetting in other open-source ecosystems, such as that of the Linux operating system. And even if careful analysis finds that the risks of open-source tinkering on sufficiently advanced AI models exceed the benefits at a given moment (given the limits of alignment knowledge at that point), policymakers should not parlay that finding into a blanket ban on open-source AI models, including those short of the technological frontier.

There are good reasons to expect advances in AI to have transformative impacts on society, including states themselves. And it should come as no surprise that incumbent authorities will react aggressively when perceiving threats to business as usual; indeed, we’ve already seen this in financial regulators sanctioning disintermediated financial tools. But fear of disruption does not justify overreaction in the financial regulatory context or elsewhere.

Notably, when Hammond identified democratized AI as a “greater regime change threat than the internet,” he highlighted that the Chinese Communist Party is already proceeding on that basis. Liberal democracies can and must do better and should have greater confidence in their adaptability to technological change. Reactive policy that targets open‐​source software development carries its own risks. And tilting against open‐​source software without careful deliberation on where that leads is one of the riskiest options of all.

If you’re interested in further discussion on these issues, please join the Cato Institute’s Center for Monetary and Financial Alternatives for a conversation on open‐​source financial technology and broader questions of crypto regulation and competitiveness next Thursday, September 7, 2023.
