Quantum Economy and Technology Innovation: A Breakdown
The Quantum Economic Development Consortium (QED-C) held its first Quantum Technology Showcase on Capitol Hill, highlighting the importance of the entire innovation ecosystem to advancing quantum technologies. The event featured twenty-two QED-C member companies demonstrating technologies that are already finding their way into products and systems today. The showcase followed the U.S. National Science Foundation's Quantum Research Showcase featuring NSF-funded university researchers from across the nation.
The QED-C was established through the 2018 National Quantum Initiative (NQI) Act and is managed by SRI, a nonprofit research institute. Today, the consortium is a public-private partnership supported by the National Institute of Standards and Technology (NIST) in the U.S. Department of Commerce and other government agencies, along with more than 240 members, including companies ranging from startups to large tech firms, universities, and national labs.
Nvidia, the company that dominates the GPU market, is set to receive the Gold House A1 honor in business and technology. Founded as a gaming-focused chipmaker in 1993, the company has since become a leader in innovative technological advancements. Nvidia accounts for 88% of global GPU sales and is the market leader in providing artificial intelligence solutions, having expanded its reach into a wide range of AI applications, including self-driving cars, facial recognition, and natural language processing. Its co-founder and CEO, Jensen Huang, ranks as the 20th richest person in the world on Bloomberg's Billionaires Index, with an estimated net worth of $73 billion.
The precautionary principle is one way to regulate new technologies, such as artificial intelligence (AI), to prevent harm. It requires the government to take preventive action in the face of uncertainty and shifts the burden of proof onto those who want to undertake an innovation, who must show that it does not cause harm. The principle holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative. A bill in Hawaii, one of more than 500 state AI regulation bills whose proliferation threatens to derail the AI revolution, would establish an office of artificial intelligence and regulation that, wielding the precautionary principle, would decide when and whether any new tools employing AI could be offered to consumers. The bill also specifies that if someone were to use an AI model for nefarious purposes, the developer of that model could be subject to criminal penalties. This is an absurd requirement, as the creator of a model cannot ensure that it is never used to do something harmful.

The precautionary principle is difficult to apply to any technology, because it is hard to think of one that could not be used to cause harm to someone. In effect, the principle demands trials without errors, which amounts to the injunction: "Never do anything for the first time." This is not without its drawbacks, as it could prevent innovation and forfeit the benefits of repeated trials. Instead of authorizing a new agency to implement the stultifying precautionary principle, a governance regime focused on outcomes and performance should be used, one that treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm. Most of the activities to which AI will be applied are already addressed under product liability laws and other existing regulatory schemes. Proposed AI regulations are more likely to run amok than are new AI products and services.
Devika Kornbacher, co-chair of Clifford Chance's Global Tech Group, spoke to Global Finance about technology regulation, lawsuits over generative AI, and the global landscape for technology. The New York Times case is the seminal case that everyone is watching, though the complaint and defenses have changed. Any foundational model needs something to be trained on to be useful, which raises issues of intellectual property (IP), copyright, and compensation. The lack of case law affects innovation in generative AI, and there is a dissonance between case law, regulation, and innovation. The US is the Wild West when it comes to innovation, although it does have some enforcement, while China has shown more decisiveness in how to run the race as the rest of the world is still deciding how to run it. Companies struggle with how they talk about their enterprise and governance frameworks, and with whether to adopt a principles-based approach or a detailed, step-by-step procedure for compliance. AI regulation is worthwhile, but it is difficult to develop a global solution that dictates what the whole world must do, because every government works differently. Global regulation or a treaty of some sort is not happening imminently, but global principles are starting to take shape around accountability, safety, and transparency. Technology is developed, and in some respects used and consumed, regionally, which makes global regulation much more difficult. Companies are balancing the need for regulation against the push for more dealmaking, and tech regulation is affecting merger activity.