Human-reliant AI

Lunge Systems
Dec 1, 2021

It is prudent to worry about whether the “creator’s intent” will be preserved as AI models and algorithms learn on their own. What if we developed AI based not on models (or historical data), but on evidence? This is the kind of approach used in international development impact evaluation and in drug testing.

We at Lunge Systems are trying to create such an AI system using actual action-outcome linkages. The system updates itself automatically as the evidence reveals changes in causality, or as the target impact outcomes themselves change over time.

This approach offers a few crucial benefits over the current practice of relying on ethical rules (e.g. “first, do no harm”) for AI creators.
1) The AI will change its recommendations and actions if the measured impact is not positive.
2) Even if something becomes too much of a good thing, we can still change the AI’s behavior by adjusting the measured outcome that drives it (a minimal sketch of this idea follows below).
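To make the two points above concrete, here is a minimal sketch in Python. All names here (EvidenceDrivenRecommender, record_outcome, and so on) are hypothetical illustrations, not Lunge Systems’ actual implementation: an agent that scores actions purely from observed action-outcome evidence, and that can be redirected simply by swapping the outcome metric it optimizes.

```python
from collections import defaultdict


class EvidenceDrivenRecommender:
    """Toy agent that ranks actions purely by observed action-outcome evidence.

    Hypothetical illustration only; names and structure are assumptions,
    not Lunge Systems' actual system.
    """

    def __init__(self, outcome_metric):
        # outcome_metric maps a raw observation to the impact score we care about.
        self.outcome_metric = outcome_metric
        # Running evidence: total measured impact and observation count per action.
        self.evidence = defaultdict(lambda: {"total": 0.0, "count": 0})

    def record_outcome(self, action, observation):
        """Log new evidence linking an action to its measured outcome."""
        score = self.outcome_metric(observation)
        self.evidence[action]["total"] += score
        self.evidence[action]["count"] += 1

    def recommend(self):
        """Recommend the action whose average measured impact is highest.

        Actions whose evidence turns negative fall out of favour automatically
        (point 1), without anyone editing hand-written rules.
        """
        scored = {
            a: e["total"] / e["count"] for a, e in self.evidence.items() if e["count"]
        }
        if not scored:
            return None
        best_action, best_score = max(scored.items(), key=lambda kv: kv[1])
        return best_action if best_score > 0 else None

    def set_outcome_metric(self, outcome_metric):
        """Change what 'good' means (point 2) by swapping the measured outcome."""
        self.outcome_metric = outcome_metric
        self.evidence.clear()  # past evidence no longer reflects the new target


# Example: the same agent is steered entirely by the chosen outcome metric.
agent = EvidenceDrivenRecommender(outcome_metric=lambda obs: obs["wellbeing"])
agent.record_outcome("nudge_exercise", {"wellbeing": 0.8, "screen_time": -0.2})
agent.record_outcome("nudge_more_app_use", {"wellbeing": -0.1, "screen_time": 0.9})
print(agent.recommend())  # -> "nudge_exercise"
```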

By not relying on the “benevolence” of software programmers, and by using the technology itself as an antidote to the expected obsolescence of the original AI models as humanity’s needs change, we can ensure that AI remains safe and useful.


Lunge Systems

We are a technology startup enabling people to take control of their lives. We help organizations increase their efficiency through AI-driven optimization.