Gen AI Turned 2 Years Old Nov. 22nd, 2024. How’s It Going?

Happy Anniversary, Gen AI!

Two years ago today, we awoke to the marvels of generative AI. Since then, we have come to understand that along with those many marvels come many risks. Time and again, new insights temper our enthusiasm and underscore the necessity of keeping humans in the loop of AI performance. Relying on the benefits of this probabilistic computational tool calls for the exercise of human learning, experience, discretion, and intuition.

Accepting AI output at face value as the final authority in the absence of human judgment can lead to unintended and unfortunate outcomes.

The Massachusetts Institute of Technology reports that purely data-driven employee performance decisions influenced by ChatGPT produce less-than-desirable outcomes. A "rigid and mechanistic approach" to workplace evaluations returns management to a long-discredited command-and-control mentality that fails to consider the many variables in human performance.

As appealing as it may seem, MIT's research reveals that performance viewed without factual context and the application of human experience leads to inadequate personnel decisions:

When managers consulted with ChatGPT before proposing solutions, the tool intensified their focus on control and surveillance-based solutions, often at the expense of [employee] autonomy and well-being. Specifically, managers who engaged with ChatGPT were about two times more likely to propose control-based solutions, such as punishing [employees] for not using the tracking app, adding monitoring cameras, hiring external auditors to ensure compliance, and encouraging peer reporting of transgressions.

An experienced labor and employment attorney can attest to innumerable examples of the workplace chaos, risks, and liabilities that result from Gestapo-style management.

Without human review and judgment, ChatGPT-style human resources management will undo decades of enlightened personnel practice.

As with all AI applications, humans in the loop exercising their experience, perspective, and discretion are essential to help ensure justice, fairness, and empathy in decision making.

Our primary goal at Guardrail Technologies is to help keep humans in the loop of AI.

Larry Bridgesmith J.D.

Executive Director Guardrail Technologies and Associate Professor Vanderbilt Law School

Larry provides consulting and training on Internet-scale emerging technologies such as blockchain, smart contracts, artificial intelligence, cryptocurrency, and interoperable functionality.

LinkedIn Profile

https://www.linkedin.com/in/larrybridgesmith/