I’m not really talking about technical limitations
Even in the case of ostensibly fundamental obstacles, the moment we can effortlessly brute-force our way around them, they become effectively irrelevant. In other words, if consciousness does emerge from complexity, then it is perfectly sensible to view the shortcomings you mention as technical in nature.
I am only talking about those limitations inasmuch as they interact with philosophy and ethics.
I don’t know what your point is. ML models can become conscious given enough complexity? Sure. That’s the premise of what I’m saying. You appear to be ignoring the meat of my argument.
Solid edit. If I found myself confused about the context of the discussion, I wouldn’t try to resolve it by saying “I don’t know what your point is”.