
Proactively stopping negligence

Are companies responsible for their customers’ use of their product?

Are companies responsible for their customers’ use of their products? Not universally, but there is considerable potential for negligence in a company’s failure to vet its customers or to attach contractual safeguards at the product’s point of sale. Consider Apple’s App Store. The Apple Developer Program License Agreement delineates in considerable detail what is and isn’t allowed on the App Store. For example, anything that could disable iOS security features is forbidden. Suppose Apple had not written such clauses into the Agreement. Developers could then use the App Store to deploy apps that compromise the devices of everyone who unknowingly downloads them. Clearly, Apple would be negligent in allowing such actions to occur; it would be liable for the actions of these developers.

Apple instead took the proactive approach and averted such pitfalls by adopting the Agreement. We should view this model as a means of preserving intended use: if companies are to maintain their position as a benefit to society, they should take the time to consider how their products can be used in harmful ways. From that reflection, solutions can be designed to stop nefarious use of one’s products. Different sectors handle these issues according to their possible harmful outcomes: we see it in gun control with the vetting of buyers, and in the regulation and distribution of prescriptions, vaccines, food, and so on. nugget.ai, of course, deals with concepts pertaining to HR, personal information, and general technology usage, and as such we have considered the ramifications of our software. Jobs determine people’s livelihoods; what if our software became an efficient, ruthless firing tool? We have devised ways to prevent this, ensuring that future products built around worker productivity will never lead to negligent action on nugget.ai’s part, and thus no harm from any company that uses our software.

The Problems

With the advent of AI-driven workplace operations, metrics and insights into worker productivity immediately provoke worries of AI-driven firing squads. Fall below a threshold and you’re fired. Miss the top 10% of most productive employees and you’re suddenly ineligible for a raise or promotion. This threatens employee wellbeing if workers are constantly under pressure to perform at their best; a large part of employment is security, and if that security is compromised, there is a serious problem. So: how can nugget (and other companies) ensure their products are used ethically and responsibly by their customers?

The Solutions

Agreements around software usage are the key here, and flexibility is essential when it comes to matters of consent. Here are some moral principles on the topic of consent and harm (namely principles 1, 1.1, 1.2, 2, and 3). Principle 2 is perhaps the most potent and applicable, and so it is explored further here as the solution for ensuring various products are handled correctly.

The solution lies in providing employees enough assurances or compensation to gain their consent to data collection and usage. People today are already on board with the idea that their data can improve their experience; to what degree they are willing to share is a personal matter. So, before the product can actually be sold, some agreements must first be made (similar to Apple’s Agreement).

To obtain the software, companies must agree to guidelines on how to use it. Principle 1.1 is a good example of a guideline: one way for this to play out is that actions must be approved by HR before they are carried out. HR may set up standard practices and procedures integrated with the software, rather than the software acting over and above HR. A well-defined process will assure employees that they are safe from any kind of unfair termination: not a single person will be fired directly because of the software (it remains possible to be fired for other reasons, but that is true in any workplace). This kind of agreement suits large companies that already have well-established practices and robust departments managing a large population. Some companies, however, have no current need for such departments, so the aforementioned procedures are simply unfeasible. This is not a problem. First, smaller companies are less inclined to consistently turn over employees; they are looking to improve employee spaces, not weed people out. Second, compensation is an alternative to well-established procedures. Employees will understand that new processes are being created as AI and HR practices develop, so the risk of growing with the company should be compensated, equalizing the agreement and thereby incentivizing consent.

Summary

As technology continues to improve, companies that aim to be forerunners of change have to be vigilant about unforeseen consequences. With careful thought, these potential consequences can be identified and mitigated. The solution may apply universally, or different use cases may require different solutions (such as contractual assurances or compensation). By raising concerns, we can adapt and overcome them. This is a step in that direction.

Interested in learning more about our solutions? Contact us today to speak with one of our experts!

Nicholas Tessier 🧠

Director, Ethical AI