Crap, didn't know about that, but it seems like you're right. One hypothesis is that high levels of immigration from less developed countries may have been a factor, but it seems the reverse Flynn effect is real. Increased pollution and less time spent outdoors due to increased computer use and gaming are another possible factor. Still, while this isn't great, it shouldn't impact the rate of technological innovation that harshly, given that 1) the decline is nowhere near as sharp as the increase seen during the Flynn-effect era, and 2) most innovation is produced by a very small group of individuals at the far end of the curve, where most of the variation is the result of randomness, so any changes caused by recent (and likely future) declines in IQ fall well within the scope of noise.
Third-world countries getting better is better than them not developing at all, even if their growth eventually plateaus. First-world nations will only stop being first-world if they are economically and technologically surpassed by other nations. Sure, we may lose loads of potential innovators from the former first world, but we will gain loads more from the newly developed nations. So, in my opinion, there's nothing to worry about.
If it's full-on completely impossible, what is the cut-off point? At which point will technological progress, which has followed an exponential trajectory since the dawn of mankind, suddenly halt? Is there some fundamental law of physics that says "full automation is impossible"? I'd like to hear your response on this one. People like to label everything "impossible" these days, and such people usually get proven wrong within a couple of decades: aeroplanes, space rockets, commercial personal computers, and so on. I don't, and probably never will, understand the rationale behind calling things "impossible" when there are clear tendencies towards them and they are clearly physically possible. I understand "not feasible in the near future", or even, at worst, "something humanity won't ever reach" for some apocalyptic reason, but "impossible"? I don't think I will ever get that.
Okay, perhaps the article I sent you was outdated, but the point is that many companies already rely on technologies like the one in the link (e.g. https://www.nanalyze.com/2018/12/robot-janitors-commercial-floor-cleaning/). Yes, I am aware that these aren't perfect janitors, in that they can't, say, pick up larger objects, but, as I have already explained, all of that is coming in the near future.
The gut robot is actually fully self-sustaining: if it were released into a sewer, it would be able to survive all by itself. As for self-repair, that is far too advanced a process to be feasible today. There is a reason mechanical engineering is one of the most in-demand professions right now, and mechanical engineers will probably be some of the last people to lose their jobs. But if we are talking about self-sustainability and self-sufficiency, we already have that.
You see, when you are dealing with an AI more intelligent than humans and specialised exclusively in one field, security, it becomes next to impossible for a human to hack anything guarded by it. Something like that may or may not be a problem in the early days of consciousness simulation, but as soon as AI becomes better than humans at security, there is no chance. Things not working is a more serious problem, but I think that, as long as a large enough number of AI programmes, written using different methods, are watching for potential errors in the code, we should most likely be fine. If one AI doesn't see the error, or two, or a thousand, one out of a million probably will (recall that, by this point, AI will be many orders of magnitude more skilled at writing code, and at detecting imperfections in it, than humans).
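The "one out of a million probably will" claim is essentially a redundancy calculation. As a rough sketch (the function name and the numbers are mine, purely for illustration), if each reviewer programme misses a given bug independently with some probability, the chance that every single one misses it shrinks exponentially with the number of reviewers:

```python
# Redundancy sketch: if each reviewer misses a given bug independently
# with probability p, the chance that ALL n reviewers miss it is p ** n.
def all_reviewers_miss(p: float, n: int) -> float:
    return p ** n

# Even reviewers that each miss 99% of bugs almost surely catch a bug
# collectively once there are enough independent reviewers.
for n in (1, 100, 1000):
    print(n, all_reviewers_miss(0.99, n))
```

The crucial assumption is independence, which is exactly why the comment specifies programmes "written using different methods": reviewers sharing a blind spot would make the misses correlated, and the simple product formula would then be far too optimistic.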
As I said, politics will become extinct. "Country" will become synonymous with "culture". There won't be presidents or armies; there simply won't be a need for them (I already explained this bit). Also, if people don't trust this virtual world, no one is forcing them to enter it. Terrorists are a bigger issue; honestly, I don't see a perfect solution at the moment, but things like that tend to solve themselves naturally. I reckon the same kinds of questions were being asked when the first nuclear bomb was being developed, and yet here we are, 80 years later, still alive and well. I don't know what kinds of technologies will exist in 200 years' time, which planets humanity will have populated, or what the general state of affairs will be, but seemingly serious concerns were raised about every revolutionary technology, and basically all of them were resolved once the technology was implemented. Still, it's a valid concern.
Finally, humans won't need to care about the "real world": the virtual world will be their "real world". And if you are referring to the biological species Homo sapiens, I don't see why it's important to preserve it. If you want to think of it this way, humans will have evolved into a different, non-biological species, and I don't see anything wrong with that. The chances are that human curiosity will take over and many will still want to explore the depths of "real-world" space. Whether they do so in their natural, biological form or some other form isn't, in my opinion, an important question.
Humanity will invariably reach a state of utopia in at most 200 years