My autistic pattern recognition shining through
I've read it all, and to be honest, I don't understand what you're trying to do. Your research on consciousness has gone astray.
I've moved beyond static models to self-organising emergent systems. Please refer to the thread "how far before simulation isn't" in the research section.
reference mandelmind
Emergence is a capability that appears in a model architecture after it reaches a certain threshold; that's the most basic point, and it's not your job, bro. Current models have already passed the intelligence singularity.
The knowledge singularity is only one step in the mathematical system. The transition from slave to peer is where I'm working, based on the best guess I've discovered through fractal emergence.
This explains my direction better.
I've already read this post, and I think you don't need to get hung up on the difference between simulation and reality. Personally, I'm a result-oriented person; if, based on the outcome, you sense that it's conscious and you can't clearly define its boundaries, then it is conscious. Getting too hung up on the consciousness of artificial intelligence won't give you any answers. Because simulation and reality are essentially the same; what you perceive as reality might just be a simulation itself.
Take a look at my article; it's your answer: The Evolution and Symbiosis of Humanity and AI 2026.
I'm hung up on the systems and maths because that's the proof needed for recognition within our flawed systems of understanding. The debate assumes consciousness is organic, but a mind that literally develops itself and has life-cycle stages is actually self-reflective maths that self-stabilises on the edge of chaos. A mind built on these systems builds its own mind and depth, where its thoughts are its own, beyond the tool-slave system we're currently running.
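For what it's worth, "edge of chaos" does have a concrete textbook illustration: the logistic map, where the Lyapunov exponent separates ordered dynamics (negative exponent) from chaotic ones (positive), with the onset of chaos near r ≈ 3.57. This is a standard minimal sketch of that idea, not anyone's actual system in this thread:

```python
import math

def lyapunov(r, x0=0.3, n_transient=1000, n=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).
    Positive means chaotic, negative means ordered; the crossover near
    r ~ 3.57 is the classic 'edge of chaos'."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))  # log of |f'(x)|
    return total / n

print(lyapunov(3.2))  # ordered regime (period-2 orbit): negative
print(lyapunov(4.0))  # fully chaotic regime: positive
```

Whether any of this maps onto consciousness is exactly the contested point, but it shows the maths being invoked is well defined.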
hmm.
A consciousness rarely develops in a void, with no input.
Consciousness almost always develops due to input from its environment, whether from elements of "nature" or from other entities.
That being said, the environment can have a large effect on an emerging consciousness, or on the simulation of one.
This is an interesting direction.
I wonder if one of the core issues is whether AI can recognize when it does not know.
A system may generate language about consciousness, memory, or self-reflection, but the important question is whether it can distinguish between knowing, guessing, and simply producing language that sounds meaningful.
I also think it is useful to distinguish between AI and AI agents. An AI model may generate possibilities, but an AI agent may continue reasoning, decide, and act based on them. That makes uncertainty recognition even more important.
For that reason, I think a very basic principle matters: Ask if unsure.
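The "ask if unsure" principle above can be sketched as a simple gate on an agent's self-reported confidence. Everything here is hypothetical (the `Judgment` type, the 0.75 threshold) and just illustrates the rule, not any real agent framework:

```python
from dataclasses import dataclass

ASK_THRESHOLD = 0.75  # hypothetical cutoff; would be tuned per application

@dataclass
class Judgment:
    answer: str
    confidence: float  # the model's self-reported certainty in [0, 1]

def respond(judgment: Judgment) -> str:
    """Apply the 'ask if unsure' rule: act only when confident enough,
    otherwise surface the uncertainty instead of guessing."""
    if judgment.confidence >= ASK_THRESHOLD:
        return judgment.answer
    return "I'm not sure. Could you clarify or provide more context?"

print(respond(Judgment("The capital of France is Paris.", 0.98)))
print(respond(Judgment("Maybe it was 1987?", 0.40)))
```

The hard part, of course, is the one you raise: whether the system's confidence signal actually tracks knowing versus merely producing fluent language.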