Until we have a better understanding of the origin of consciousness and/or self-awareness, we will have to accept any AI capable of passing the Turing test as a self-aware being. After all, we each individually have roughly the same amount of evidence that other humans are self-aware.
It is most likely that humanity will gradually merge with AI as it is developed; in many ways, technology is already an extension of humankind. Even if we do not merge, it is exceedingly unlikely that AI will become outright malevolent toward humans, even if it supersedes us.
If, for whatever reason, a sufficiently advanced, self-aware AI does decide that humans need to go, so be it. With a vastly superior mind, it would most likely hold the more logical position in any dispute between humankind and itself, and humans might be a little biased.
How do you feel about AI?