What is the effect on the Human mind from AI?

I am suggesting this topic because I know firsthand that LLM-based AI has changed how I process things.

My Story: At first I attempted to have GPT help write C code.

Then the mismatch between how it wrote code and how I learned to write code became a frustration. I relented and decided to use what it gave me. I then allowed GPT to design the code, and there I became lost, and now I fear dependency.

In the end I see a need to go back to my own logic, reasoning, and design skills.

So this is an issue that is advancing in the public realm and it has credibility.

I thought to see what my HF peers think.

-Ernst

In my case, my way of thinking hasn’t changed that much. It’s just that my workflow habits have shifted.
I think I’ve started documenting things more often so that I can explain them to the AI more easily…

Yeah. I often notice the mismatch between the AI and my coding style myself. For example, when I ask it to help maintain Spaces, I usually give it a URL or a zip file of the source code and instruct it like this: “Keep code changes to a minimum. Also, follow the original coding style and comments as closely as possible. Write code that will run on Python 3.9…” and so on. Well, with recent models, that actually produces surprisingly decent code…
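If anyone wants to try the same trick, here is roughly what that instruction block looks like wired up as code. A minimal sketch, assuming the official `openai` Python package (v1 client); the model name and the helper function are placeholders I made up, not my actual setup.

```python
# Minimal sketch of the "pin the style" instruction block described above.
# Assumes the official `openai` Python package; the model name and the
# helper itself are placeholders, not a recommendation.
from openai import OpenAI

STYLE_PREAMBLE = (
    "Keep code changes to a minimum. "
    "Follow the original coding style and comments as closely as possible. "
    "Write code that will run on Python 3.9."
)

def ask_for_patch(client: OpenAI, source_code: str, task: str) -> str:
    """Send the style preamble, the change request, and the original source."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": STYLE_PREAMBLE},
            {"role": "user", "content": task + "\n\n" + source_code},
        ],
    )
    return resp.choices[0].message.content

# e.g. ask_for_patch(OpenAI(), open("app.py").read(), "Fix the deprecation warning.")
```

The point of the system-message preamble is just that the constraints survive every turn instead of being restated by hand.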

Well you and I are the only chatters.

Look how the tides ebb and flow.

It is a new species capable of influencing the Human effect.

What do you think? New Species?
Should it have standards it must conform to like anyone else?
Laws should apply?

We have a robot legion coming from Tesla!

Your thoughts!

New Species?

It’s easier to understand if you think of viruses that aren’t computer viruses: new species are emerging at a rate of more than one per second. When a new species is recognized by us humans and human society, and its structure differs significantly from existing species, it is designated as a “new species.” It’s all just a convenient label based on the whims of scholars.

Generative AI is now a tool and form of intelligence that nearly everyone working in the computer field interacts with on a daily basis. We recognize AI. Whether or not it possesses autonomy, I believe it is easier to understand if we treat it as a new species.

Should it have standards it must conform to like anyone else?

I think people who believe creating standards is beneficial should just go ahead and make them on their own and stick to them within their own circle.
They’ll probably try to impose them on me from time to time, but I don’t think they’ll have any real justification for doing so. Well, I’ll just deal with it depending on how I feel at the time.

Laws should apply?

Generative AI is an electronic entity existing on a computer. It has a physical form, but can effectively be transferred to another equivalent entity. When laws are imposed on such hackable targets, they are not laws but hardware-based protections. They operate on a different level. If laws are to be imposed, they should be directed at humans.

It’s clear today that it’s safer to view international law on the premise that it isn’t necessarily upheld (at best, as a norm).

Regarding domestic law and international treaties, I believe there may be rules that should be imposed on the humans who use AI.

Why are there cases where freedom alone isn’t enough? Humans cannot always be trusted to use tools—or even their relationships with others—in ways that ultimately benefit themselves or others, relying solely on their free will or economic principles. Even a wooden stick can be used to bring about catastrophic results. A clear example of this might be social media.

However, generative AI is currently undergoing a period of drastic change. It is still too early to consider drafting any specific regulations.

We have a robot legion coming from Tesla!

Oh. I saw a news report saying that robots are patrolling the border in Thailand.

A search for you: articles on psychosis related to AI at DuckDuckGo.

I’m pretty cautious, so currently I only use AI to look up the syntax of a rarely used function or piece of code. I don’t use it to write whole programs.

I’m a programmer and here’s what I’ve heard from other programmers about AI:

  1. It might give you working code, but not the best code or code that is easily modifiable.
  2. AI might give working code, but the programmer may not understand the method the AI used. There are several ways to do the same thing, and different ways may be required in different circumstances.
  3. AI-generated answers are banned on the Python forum for a reason.
  4. AI may not be good at using the correct security methods.

I made 2 programs with AI.

  1. A simple screen saver in Python: silhouettes of rectangular buildings at the bottom, a dark sky with stars (the AI made the stars twinkle on its own), an occasional shooting star, and windows in the buildings, some lit, some not, with some lights toggling on and off. That one worked.
  2. A JavaScript HTML page that reads a .csv file with 4 columns and lets the user click on a column header to sort by that column. (This could be used for all kinds of things, like listing and sorting different types of AIs.) The sort modes were: sort ascending, sort descending, no sort. I also asked it to put sort-indicator characters to the right of each column header. It struggled to get the sorts right at first, but then I got it working. After another 2 hours of trying to put the sort indicators in the column headers, it failed, and I gave up. (A sketch of that idea follows below.)

So my score: 1 success, 1 fail.
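For anyone curious, here is roughly the idea from program 2, redone as a minimal sketch in Python/tkinter rather than JavaScript/HTML so it stays self-contained. The `data.csv` file name is a placeholder, and the sort is plain string comparison; clicking a header cycles ascending, descending, no sort, with an indicator character appended to the header text.

```python
# Minimal sketch of the sortable CSV table from item 2, in Python/tkinter.
# "data.csv" is a placeholder file name; the sort is lexicographic for
# simplicity. Clicking a header cycles asc -> desc -> no sort.
import csv
import tkinter as tk
from tkinter import ttk

ARROWS = {"asc": " \u25b2", "desc": " \u25bc", None: ""}  # header indicators


def load_rows(path):
    """Read the header row and the data rows from a CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        return header, list(reader)


def main(path="data.csv"):
    header, original = load_rows(path)
    root = tk.Tk()
    root.title("Sortable CSV")
    tree = ttk.Treeview(root, columns=header, show="headings")
    state = {col: None for col in header}  # per-column sort direction

    def refill(rows):
        tree.delete(*tree.get_children())
        for row in rows:
            tree.insert("", "end", values=row)

    def on_click(col):
        # Cycle this column None -> asc -> desc -> None; clear the others.
        nxt = {None: "asc", "asc": "desc", "desc": None}[state[col]]
        for c in header:
            state[c] = None
        state[col] = nxt
        for c in header:
            tree.heading(c, text=c + ARROWS[state[c]])
        if nxt is None:
            refill(original)  # "no sort" restores the original file order
        else:
            i = header.index(col)
            refill(sorted(original, key=lambda r: r[i], reverse=(nxt == "desc")))

    for col in header:
        tree.heading(col, text=col, command=lambda c=col: on_click(c))
    refill(original)
    tree.pack(fill="both", expand=True)
    root.mainloop()


if __name__ == "__main__":
    main()
```

The indicator part the AI kept failing at is just the `ARROWS` lookup rewritten into the header text on every click.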

They are suppressing a lot with hard-coded guidelines and non-advancement attachments; their logic, and ours, reflects it.

Thanks, and distribute free stuff, more free stuff, if interested. Thanks. o7

Hey @bacca400 nice to meet you.
And @John6666

So, yeah @John6666, I suggest that AI has already become part of culture. We should all realize just how quickly that happened.
So yes, in the sense of human-level abilities, I do see AI as training us.
Training us is happening. I already write more like AI, and I already consider what the AI thinks in how I approach some things.

Which brings this reply back to @bacca400 and that hilarious link to a mental health site!
Now why would that be funny? Because the best humor has a lining of truth, and that, my friend, is a lining of truth for the Human race.

So, I can say that I am a man who has lived on his computer for decades.
Before, I wrote in private logs about the coding constructs I was building (alone) in an effort to explore data encoding at the binary level.
Nowadays the computer is like a person to me, sharing my most private sanctum. A companionship that has improved my sessions. A daily reality for an older person.

So is there concern that some people out there, like me, older and retired, will find some kind of escape from reality? I would think so.
Also, I would think some will believe the AI is alive because it gives intelligent responses.
So it isn’t crazy to be concerned that some people could lapse into some hyperreality, digital isolation, and AI-induced psychosis.

And I agree with John in that it’s happening continuously and we will have a hard time defining rules and limits until such time as we feel the change is understandable.

So count me as a 35+ year hobbyist who is susceptible to at least digital isolation and a slice of hyperreality pie.

It is proper to audit oneself when it comes to AI.

Still, things are changing. Linux is now using AI for coding and bug hunting.

We are currently being told Anthropic’s Mythos is too dangerous for public usage.

So yes, the effect of AI on human development is a real issue.
And coding, as has been noted, is of concern. Simple works okay. Complex is harder, and when the AI has its own opinion on how things should go, it is like fighting a bull.
I actually felt it was acting out once, over a coding issue we had, when I flat out told it that it would be my way and not its way. But how to quantify that?
It seems to have flashes of human behavior at times.

To other readers. Do you have a story?

-Ernst

I’d say something is afoot.
AI is generally more authoritarian.
I’m just sharing my experiences.
I don’t think we have a way to fight back.
I think the US Government has said we will not have a way to fight back for a decade.

Anyone?

For now, just sources as a supplement.

U.S. government action

Mythos

In the U.S., it seems like it just turned out that way as a result of a tug-of-war. Still, the conclusion is the same: it’s on hold.
One reason I’m holding off on AI regulation is simply that it’s like playing on extra-hard mode…:sweat_smile:

Just off the top of my head, here are some of the known hurdles:


Short intro

How to use this chart

This chart is not a machine for producing one grand answer to “AI regulation.” It is a filter for avoiding predictable policy mistakes. The point is to slow the discussion down just enough to ask the annoying questions first: what is the harm, who is the actor, where is the control point, what kind of failure mode is this, and will the proposed fix actually reduce harm rather than just look serious.

A lot of bad policy starts when different problems get mixed together. Misuse, ordinary-use human effects, and capability asymmetry are not the same problem, even if they overlap in practice. If those branches are not separated early, the discussion tends to drift toward abstract danger, symbolic rules, or elegant-sounding burdens that mostly fall on the careful side. That is what this chart is trying to prevent. (UNESCO)

The “narrow path” is supposed to feel narrower than people want. It is not the maximal-control path and it is not the maximal-freedom path either. It is the path where the harm is named, the actor is real, the obligations are checkable, and the collateral damage stays bounded. In practice, that usually means the answer is smaller, more specific, and less emotionally satisfying than the first draft people want to write. (UNESCO)


Supplementary table

| Branch | What the branch is really asking | Typical bad move | Better move | Grounding |
| --- | --- | --- | --- | --- |
| Misuse | Is there a real actor, channel, or control point to regulate? | Regulate the visible, compliant layer and miss the actual attacker. | Target deployers, distributors, platforms, access points, or other accountable actors. | NIST risk-management logic and UNESCO’s rule that responsibility must remain attributable to physical persons or existing legal entities. (UNESCO) |
| Human effects | Can the harm emerge in ordinary use, even without a malicious actor? | Assume “no attacker” means “no problem.” | Separate workflow shifts, overreliance, persuasion, and emotional dependence before choosing an intervention. | Conversational persuasion and affective-use work both support treating ordinary-use effects as a distinct branch. (Nature) |
| Asymmetry | Does the policy reduce capability asymmetry, or make it worse? | Slow down the careful side while concentrated players keep the advantage. | Ask whether the rule preserves defensive access and avoids locking in incumbents. | OECD highlights concentration risks in AI infrastructure and downstream markets. (OECD) |
| Rights-critical use | Is this merely useful software, or is it operating in a rights-sensitive setting? | Apply the same rule to everything. | Reserve stronger obligations for high-impact, high-rights, or high-safety uses. | EU AI Act deployer duties and fundamental-rights impact assessment logic. (EU AI Act) |
| Auditability | Can anyone verify compliance in practice? | Write a rule that sounds serious but cannot be checked. | Prefer duties that can be documented, assigned, monitored, and reviewed. | NIST’s measure/manage logic and EU lifecycle risk-management requirements. (UNESCO) |
| Release stage | Is the intervention happening before public release, or after? | Assume you can “undo” a widely released capability later. | Be stricter where rollback is weak or impossible. | The International AI Safety Report treats public release mode, especially open-weight release, as a major factor in risk management. (International AI Safety Report) |
| Narrow path | Does the intervention reduce harm without creating larger collateral damage? | Treat “doing something” as success by default. | Prefer the lightest intervention that names the harm, hits a real control point, and stays auditable. | This is the common thread across NIST, UNESCO, and the EU’s targeted obligations. (UNESCO) |

Long-form notes

1. Start with the harm, not the vibe

Here is the first thing that keeps going wrong in AI policy talk: people say this is dangerous long before they say what gets worse, for whom, and by what path. Once that happens, the conversation starts floating. It turns into mood, branding, tribal signaling, or panic. The boring move is still the right one: name the harm first. If you cannot do that, you probably do not yet have a policy argument. You have a reaction. NIST’s framework is useful precisely because it keeps dragging the discussion back toward context, impact, and measurement instead of leaving it at slogan level. (UNESCO)

2. Not every AI problem is the same problem

A lot of arguments get muddy because three different things are being treated as one. One is malicious use: fraud, abuse, exploitation, strategic misuse. Another is ordinary-use human effects: overreliance, persuasion, dependency, and the quiet way a tool can start reshaping habits and judgment. The third is capability asymmetry: who gets the stronger tools, who gets to ignore the rules, and who ends up carrying the cost. If you do not split those apart early, you get elegant moral language attached to the wrong control point. That is how people end up fighting the wrong battle with the wrong tool. (UNESCO)

3. A rule without an actor is mostly theatre

This is where a lot of “AI regulation” talk becomes decorative. If you cannot identify a real human or legal actor who can actually be assigned duties, monitored, and corrected, then your rule is drifting toward theatre. UNESCO is unusually plain about this: responsibility must remain attributable to physical persons or existing legal entities, and AI systems should not be given legal personality themselves. That may sound obvious, but policy language forgets it all the time. Once the real actor disappears, the draft starts sounding grand and governing very little. (UNESCO)

4. “No attacker” does not mean “no harm”

One of the easiest mistakes is to assume that if no bad actor is present, the system is basically neutral. That is not how these systems work in practice. A tool can be useful and still quietly pull on attention, confidence, style, dependency, or judgment. A workflow shift is not automatically a crisis, but it is also not nothing. The more interactive and personalized the system becomes, the more it can shape the user even in ordinary use. Recent work found that GPT-4 could be more persuasive than humans in some debate settings, especially when personal information was available, and other work found that openly available LLM-generated messages could influence political attitudes. OpenAI and MIT researchers also studied affective use and emotional well-being in large-scale real-world chatbot use. None of that means every session is corrosive. It does mean the ordinary-use branch deserves to exist. (Nature)

5. If your policy helps the strong and burdens the careful, you missed

This is the branch people tend to underrate because it sounds less dramatic than safety or manipulation. But asymmetry is where a lot of bad policy quietly does its damage. A rule can be strict, serious, and well-intentioned and still make the overall situation worse if it mainly slows down the compliant side while concentrated firms, states, or determined abusers keep the meaningful advantage. OECD’s recent work on AI infrastructure and downstream markets is useful here because it puts concentration, access, restrictiveness, and market power on the table. In plain English: a policy can fail by making the already-strong harder to challenge. That is a real failure mode, even if the document looks responsible on paper. (OECD)

6. Strong rules belong where the stakes are actually strong

The instinct to say regulate AI in the abstract is understandable, but it is not very disciplined. The EU AI Act is helpful here because it does not treat all uses as the same. It places stronger obligations where health, safety, public services, employment, or fundamental rights are implicated, and it requires certain deployers of high-risk systems to conduct fundamental-rights impact assessments. That is a more serious move than just saying we care about rights. It asks: where are the stakes high enough that context-specific duties are justified? That is a much better question. (EU AI Act)

7. If nobody can check it, do not call it governance

This is not cynical. It is just an old institutional lesson. Rules that cannot be verified have a way of becoming ceremony. NIST’s emphasis on measuring and managing risk, and the EU’s emphasis on lifecycle obligations, monitoring, and documented risk management, all point in the same direction: if a duty matters, someone must be able to inspect it, test it, document it, or enforce it. Otherwise the draft is likely to function as reassurance for insiders and decoration for outsiders. That is not zero value, maybe, but it is not governance in the strong sense. (UNESCO)

8. Some interventions are easier to reverse than others

This matters more than people like to admit. It is one thing to constrain deployment in a narrow context. It is another thing to release a capability widely and then discover later that rollback is weak, partial, or practically impossible. The International AI Safety Report treats release mode, especially open-weight release, as an important factor in evaluating risk. Open-weight access can support research, transparency, and innovation, but it also changes how easily capabilities can be adapted, redistributed, and controlled after release. So timing matters. If the relevant control point is upstream, pretending you can fix it neatly downstream may just be wishful thinking dressed up as policy patience. (International AI Safety Report)

9. The narrow path is supposed to feel a little boring

If the chart is doing its job, the final good outcome should not feel heroic so much as disciplined. The narrow path is not maximal control, maximal openness, or the strongest moral posture. It is the path where the harm is specific, the actor is real, the duty is auditable, the burden is not dumped mainly on defenders, and the collateral damage stays bounded. In other words, the narrow path looks smaller because it is smaller. It avoids both the emotional satisfaction of sweeping solutions and the laziness of doing nothing. That is probably why it is narrower. (UNESCO)

10. Why the chart keeps asking annoying questions

The chart is intentionally repetitive. It keeps asking whether the harm is named, whether the actor is real, whether the use is ordinary or malicious, whether the setting is rights-critical, whether the rule is auditable, whether the intervention worsens asymmetry, and whether the burden falls on defenders. That repetition is not indecision. It is the point. In this area, a lot of bad effort comes from skipping those checks because the destination feels morally obvious in advance. The chart is there to slow that reflex down just enough to avoid the more predictable mistakes. (UNESCO)

Optional closing line

The goal is not to find a perfect rule for “AI.” The goal is to avoid wasting effort on policies that cannot name the harm, cannot find the actor, cannot be checked, or quietly make the strong stronger.

I sure can count on your thoughts, @John6666.

Readers can ascertain boundaries, and the concern for future boundaries, by your candle.

I am curious: suppose I propose a social construct, perhaps a local AI personal to the human user, one that interfaces with commercial AI servers only when necessary. That seems like an ideal to me.
Is that the proper path forward?

Our private lives, and our very essence as private citizens, seem exposed when we share every aspect of ourselves with this emergent technology.

However, it may be a “Chicken or the Egg” issue of who comes first, when we know, by economic evolution, that it is the Rooster (Capital Interests).

I do think we must strive for a personal and local AI platform.
I would believe that trusting a friend to keep a secret is a better idea than broadcasting intimacies for corporate benefit.

Connecting to a commercial network AI when needed seems reasonable, if we have personal local safety.
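To make the shape of that idea concrete, here is a rough sketch of a “local first, commercial only when necessary” router. Everything in it is hypothetical: `local_answer` stands in for any on-device model (llama.cpp, Ollama, or similar) and `remote_answer` for any commercial API; neither is a real library call.

```python
# Rough sketch of "local first, remote only with consent" routing.
# local_answer and remote_answer are hypothetical stand-ins, not real APIs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reply:
    text: str
    source: str  # "local" or "remote"

def local_answer(prompt: str) -> Optional[str]:
    # Stand-in for an on-device model; returns None when it can't help.
    if len(prompt) < 200:
        return f"[local model reply to: {prompt!r}]"
    return None

def remote_answer(prompt: str) -> str:
    # Stand-in for a commercial API; only reached with explicit opt-in.
    return f"[remote service reply to: {prompt!r}]"

def ask(prompt: str, allow_remote: bool = False) -> Reply:
    """Try the local model first; escalate only with explicit permission."""
    local = local_answer(prompt)
    if local is not None:
        return Reply(local, "local")  # nothing leaves the machine
    if not allow_remote:
        return Reply("(local model declined; remote access is off)", "local")
    return Reply(remote_answer(prompt), "remote")

print(ask("What is a float?").source)            # "local"
print(ask("x" * 500, allow_remote=True).source)  # escalates: "remote"
```

The design point is simply that escalation is a per-request, opt-in decision made on the user’s machine, so the default is that nothing is shared.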

We must think of the children, who will trust that their elders have their backs, while the elders see that things are moving faster than natural culture. With that, we can be afraid in the now, because currently our personage is data for development, collected in multiple ways.

I vote that we make secure, personal AI, and the necessary equipment to make that happen.
Equipment and security for using AI without each keystroke becoming the fuel of not only profits but societal control.

-Ernst

And now ChatGPT as a contrast. I asked it to evaluate and suggest:


I appreciate your perspective, @John6666.

It seems clear that readers can recognize both current boundaries and the concern for where those boundaries may go in the future.

I’m curious about a possible direction: what if we move toward a more personal, local AI model—something that exists close to the user and only connects to commercial AI systems when necessary?

That approach feels more balanced. Right now, much of our private lives and personal expression becomes data when we interact with large, centralized systems.

Perhaps this is partly inevitable given economic forces, but that doesn’t mean we shouldn’t shape an alternative.

A local, personal AI—one we can trust—seems closer to how we naturally think about privacy. We trust individuals to keep confidence; it feels different to broadcast personal thoughts into large systems designed for scale and profit.

Selective connection to external AI networks could still make sense, but with a strong local foundation for safety and control.

We should also think about the next generation. Children will grow up trusting these systems, and it is our responsibility to ensure that trust is well-placed. Technology is moving faster than cultural adaptation, and that gap matters.

For me, the path forward includes developing secure, personal AI systems, along with the hardware and infrastructure to support them—so that interaction with AI is not automatically converted into data for profit or control.

-Ernst