Once you hand over the keys to your artificial-intelligence-powered SaaS application, how do you know whether the application will be unbiased, secure, private, ethical, and fair in its use? How important these concepts are to you depends largely on the type of application you have, but all applications should incorporate at least some of these principles, and the topic deserves careful thought.
What does the term responsible AI mean? This Venture Beat article explains what it is and why it is increasingly important. Another way to look at responsible AI is how certain AI providers have defined it: Microsoft defines responsible AI as adhering to six principles, while Google defines their principles of responsible AI as objectives for application development.
Harvard Business Review's recent article on managing the risks of AI gives a perspective on what some of those risks are and how to manage them. I was particularly interested in the implications of locked versus unlocked algorithms and the advantages and disadvantages of each.
Certainly some applications carry fewer risks, and less need for responsible AI objectives, than others, but there are always reputational and other risks associated with AI applications. In some cases it may be sufficient to rely on the AI principles put in place by the software tool and API providers you use; in others, you may need to define specific principles of your own.