Regulate or not?
How? Who?

Regulate or not? This is the main question asked when talking about AI. Most people seem to agree with the idea of regulation (for the sake of a safer and more ethical future society). However, AI is a technical topic (not easily accessible to the general public), and State intervention (top-down) might stifle innovation in the field.

Regarding the latter, two main issues need to be considered:

- In Swiss law, the principle of technological neutrality prevails. Swiss legislators adopt rules that are as neutral as possible with regard to the underlying technologies, so as to cover as many future innovations in the related fields as possible. This approach is linked to Switzerland's heavy legislative and democratic procedures.

- Laws (and ordinances) are strongly influenced and “guided” by the political positioning of the people who sit in Parliament and/or Government at the time they are passed. Moreover, it is not reasonable to expect the members of the Swiss Parliament and/or the Federal Council to fully understand the AI industry, the technology and the needs of stakeholders (which may even change over time). A State intervention in the regulation of AI might therefore not truly answer the needs of the real stakeholders.

Self-regulation is a well-known (bottom-up) concept in Switzerland; the financial sector is a good example. State law can oblige certain actors to self-regulate by imposing a general framework without burdening the legislator with the details. This self-regulation can then be made subject to approval by a state department or public authority (e.g., FINMA in the financial sector). This would make the regulatory process more efficient and adaptable when needed, and the outcomes would be closer to reality.

Self-regulation can be achieved in two ways:

- Either impose such self-regulation on certain sectors only. In principle, most industries in which AI use is a sensitive issue (e.g., the financial sector, the medical sector, the food industry, etc.) already have “professional associations” that “impose” minimum standards (and codes of conduct, sometimes even with penalties for breaches) and take collective action. See, for example, the Swiss Banking Association or the Swiss Fund Managers Association. Each industry could then come up with its own “code of conduct” for AI use in its respective field.

Or

- encourage the creation of (and give an official State mandate to) an entity framing AI technology across all its use cases. This entity could have a general body, in which issues of general interest are discussed and self-regulated, as well as special sector-specific committees. Its main advantages would be the facilitation of coordination and exchange (bringing together the know-how of different industries and aligning technological growth with decision-making processes) and its supportive role (e.g., employing lawyers who help the sector-specific committees formulate their commitments, codes of conduct and minimum standards). A central entity would also be able to take unified positions in the consultation processes of legislative projects (e.g., the revision of the Swiss Data Protection Act). Ultimately, it would be the reference and contact point for all state authorities. It could also manage a “Swissness” label by setting the requirements for AI use cases (together with the Swiss Intellectual Property Institute), granting the label to stakeholders who fulfill those requirements, and supervising label holders to ensure compliance. Finally, the creation of such an entity (together with its label) would send a clear signal to the world that Switzerland is a key player in the field of AI.