Ethical AI report released by DST

The Release of the Ethical AI Report

Artificial Intelligence (AI) has continued to prove its relevance in many areas of life. More recently, its impact has become especially noticeable in defence, where AI can improve military operations and performance while protecting personnel from internal and external threats such as environmental hazards.

However, one major challenge with using AI is that it requires a substantial volume of research to confirm the outcomes of such experiments. Inadequate preparation can lead to negative impacts and failed research work, and the utility of an AI system must also be evaluated over time.

The Defence Science and Technology (DST) Group has released an ethical AI report based on a workshop held from July to August 2019. More than a hundred participants from various organisations attended. In the report, the concept of ethical AI is examined specifically in a defence setting.

The workshop brought together defence experts, scholars, representatives of government and corporate bodies, and members of the media. Its format combined presentations, tutorials, and discussion groups, and it was designed to produce practical responses to the ethical risks arising from AI contributions to Defence. In total, the workshop explored around twenty topics across various themes: education, command, accountability, effectiveness, transparency, integration, scope, human factors, resilience, sovereign capability, confidence, safety, test and evaluation, authority pathway, supply chain, misuse and risks, data subjects, explainability, and protected symbols.

These topics fit into five ethical AI areas: responsibility, trust, governance, traceability, and law. Each of these areas raises questions such as:

  • Responsibility: who is in charge of AI?
  • Trust: can an AI be trusted, and how?
  • Governance: how is AI managed?
  • Traceability: how are the actions of an AI documented?
  • Law: what are the legal implications of AI?
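
For readers who find a concrete artefact helpful, the minimal Python sketch below simply pairs each of these five facets with its guiding question as paraphrased above; the data structure and names are this article's own illustration, not something defined in the DST report.

    # Illustrative sketch only: pairing the five ethical AI facets with the
    # guiding questions paraphrased above. The dictionary and function names
    # are this article's illustration, not a tool from the DST report.
    ETHICAL_AI_FACETS = {
        "responsibility": "Who is in charge of AI?",
        "trust": "Can an AI be trusted, and how?",
        "governance": "How is AI managed?",
        "traceability": "How are the actions of an AI documented?",
        "law": "What are the legal implications of AI?",
    }

    def guiding_question(facet: str) -> str:
        """Return the guiding question for a facet name such as 'trust'."""
        return ETHICAL_AI_FACETS[facet.strip().lower()]

    if __name__ == "__main__":
        for facet, question in ETHICAL_AI_FACETS.items():
            print(f"{facet.title()}: {question}")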

The DST Group report, titled “A Method for Ethical AI in Defence,” summarises the subjects and topics explored during the workshop. It also offers pragmatic guidance and a framework for the interactions between operators, software engineers, and integrators, and for how those interactions can affect AI projects within Defence.

Professor Tanya Monro, Australia’s Chief Defence Scientist, explained the importance of AI technologies. She noted that these include protecting people from environmental hazards while raising the awareness of Australians, and that improvements in technology must therefore move in tandem with people’s engagement.

Furthermore, other research on the same topic has emerged, noting that the prospects of AI technologies continue to be investigated through science, technology, and research programs.

Continuing research into the influence of AI and autonomous systems on defence has already produced outcomes such as the Allied IMpact (AIM) command and control (C2) system, showcased at the Autonomous Warrior 2018 exercise, and the formation of the Trusted Autonomous Systems Defence Cooperative Research Centre (TAS DCRC).

Another outcome of the 2019 workshop is a practical plan that helps project managers, evaluators, and team members manage ethical risks.
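
As a purely hypothetical illustration of what such a plan might track in practice, the Python sketch below records a single ethical risk against one of the workshop topics; the class, its field names, and the example entry are assumptions made for illustration, not the report's actual template.

    # Hypothetical sketch only: one way a project team might record an ethical
    # risk against a workshop topic. The EthicalRiskEntry class and its field
    # names are illustrative assumptions, not the report's actual template.
    from dataclasses import dataclass

    @dataclass
    class EthicalRiskEntry:
        topic: str            # e.g. "transparency" or "data subjects"
        description: str      # what could go wrong, and for whom
        owner: str            # project manager or team member responsible
        mitigation: str       # planned control, test, or design change
        status: str = "open"  # open / mitigated / accepted

    register = [
        EthicalRiskEntry(
            topic="transparency",
            description="Operators cannot see why the system produced a recommendation.",
            owner="project manager",
            mitigation="Add an explanation view and review it during test and evaluation.",
        ),
    ]

    for entry in register:
        print(f"[{entry.status}] {entry.topic}: {entry.description}")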
