Assessment Principles of the FRR Quality Mark

The FRR Quality Mark for (AI-Based) Robotics is intended to promote transparency and to build trust among technology consumers and society at large. During the certification process, seven principles are assessed, each covering an important aspect of the concept of Responsible Robotics. These principles were developed through co-design between ethics experts at the Foundation for Responsible Robotics and subject matter experts at Deloitte. The project to develop this quality mark is currently in its pilot phase.

The Security principle assesses the degree to which the product has been tested for vulnerabilities and confirms the security of the robotics product, covering, for example, cybersecurity as well as hardware, software and technical security. The product is also evaluated for Safety, for example its primary intention and product safety. Under Privacy, the product is assessed for GDPR compliance, data minimization, ease of use and privacy by default.

Whereas the previous principles focus on the technical aspects of the product, the company and the product are also assessed on ethical and environmental aspects such as fairness, sustainability, accountability and transparency. This includes aspects such as gender or racial bias, environmental effects, and transparency about the design, development and maintenance process.

Our Seven Principles for Responsible Robotics


Security

Solutions should be rigorously tested for vulnerabilities and must be verified as safe and protected from security threats. The effectiveness of protective measures is tested periodically.

Safety

In the course of their operation, solutions must not jeopardize the physical safety or well-being of human beings. The solutions should be sufficiently tested and have clear usage instructions.

Privacy

The solution was designed in accordance with the Privacy by Design practice. Personal data is processed lawfully, kept to a minimum, and protected from external or internal influences.

Fairness

The solution shall operate in a non-discriminatory way and ensure that fundamental rights and ethical principles are protected. In addition, the purpose and tasks of the solution are clearly stated and designed to respect the full range of human abilities.

Sustainability

The solution should take action to reduce negative environmental impact and observe principles of fair employment and labor practices. In addition, the solution shall take into account social and cultural justice.


Accountability

The solution shall be designed, as far as is practicable, to comply with existing laws and fundamental rights and freedoms. Furthermore, the actions, intent and decisions of the solution should be traceable.


Transparency

The company shall be transparent about the fact that an AI engine is used in its solution. The decisions made by this model should be explainable. In addition, the entity must have a traceability mechanism in place to ensure auditability.

Next: the Pilot Phase

Pilot partners needed!

As a pilot partner, you will participate in a test run of our certification process. Although we do not yet award certifications in the pilot phase, you will receive valuable feedback on how your company performs against our assessment principles and which aspects you could further improve.

A Thank You to Deloitte

We thank Deloitte for their collaboration and contribution during the development of our assessment framework. We will be working with new partners to pilot the framework and provide a proof of concept.