Humanity will invariably reach a state of utopia in at most 200 years

  • "Sustaining itself" isn't everything. Basically all first-world countries will still be around in 100 years, and they will still have enough population to remain economic powerhouses right up until full automation. Also, more people only means faster progress, so third-world countries growing in population will only speed up our journey to utopia. China being authoritarian isn't a problem either, as politics will become extinct once all human jobs are automated. Too bad for Chinese citizens (although I'm sure they would rather live in an authoritarian first-world country than in a libertarian third-world country), but no one else will be affected.

    • Sustainability is the foundation of any system. If current trends continue, not all first-world countries will have the population to remain economic powerhouses, and the average IQ will fall to a level incapable of producing great technological advances.

      Why would politics become extinct? People will still need to survive; they will still need protection and certain functions of a ruling body. War has always been around and always will be; all it would take is for one nation to resort to war and the system falls apart. Also, I can't see complete, full automation ever really happening; there are far too many variables. We can't even produce a piece of fully automated technology capable of keeping the streets clean or picking up our household rubbish for us.

      Is there even a single piece of fully automated technology around in the world that's capable of sustaining itself?

      Even if it were possible to produce fully automated technology for every conceivable occurrence that may arise in man's life and satisfaction, you would be dealing with a literal universe of variables that could upset it, many of which won't even be known at the time of its creation.

      • I am not sure what trends you are referring to, but if we go by economic growth, leading economies will remain leading economies in the future - the margin between them and other countries will just become smaller, and that's a good thing, because it means the world is becoming more developed as a whole. You see, human civilisation isn't a zero-sum game. Other countries becoming more advanced doesn't mean already developed nations somehow lose their level of development. And their native populations may decrease by 5%-10% over the next few decades to a century, but that isn't nearly enough to hinder their economies.

        Also, current trends actually show a steady increase in global IQ scores, and less developed countries growing in population won't change that: as underdeveloped countries grow in size, they also become more developed in the process, granting them better nutrition, which is a significant contributing factor to intelligence. While genetic factors may also play a role in intelligence, factors such as nutrition play a much bigger role (as demonstrated comprehensively by the Flynn effect).

        Politics won't be needed when money is extinct. When full automation is reached, the same resources will be available to everybody, so there won't be any need for war - every human on Earth will already have the most they can possibly have. And if we take that one step further, into simulated reality, it would be impossible for someone to want something and not have it - all their wants could be satisfied instantaneously through code (probably automatically, through mind-reading). So war wouldn't really achieve anything either way.

        I think you are being too close-minded here. Robot janitors are already a thing (https://www.cleanlink.com/news/article/Amazon-Explores-Replacing-Janitors-With-Robots--20932). The reason we don't see them taking care of our roads is that doing so requires a high degree of navigational autonomy, as well as the ability to scale obstacles such as stairs. However, both of these fields are seeing rapid development (e.g. autonomous cars and the Spot Mini robot) and will probably reach full automation within the next 10-15 years, possibly becoming mainstream in 20. Robots picking up household rubbish face the same problems. But let me tell you: when these problems are addressed, it's not just cleaning that will be automated - our entire household will be automated, from cooking, to serving food, to buying food, to laundry, to ironing clothes, etc. In addition, cars and transport will be fully autonomous as well.

        Regarding full automation, you can read my reply to Mammal Lover. Full automation is pretty much inevitable and will in all likelihood happen this century or early next century. All the data points in this direction. I understand that this is tough to imagine and seems very far-fetched, but the reason it does is that our intuition tends to extrapolate linearly, while progress is not even exponential - it's super-exponential. This means that the next 100 years will actually see 20,000 years' worth of modern-day progress. If even this thought fails your intuition, think of this: what would people in the 70s think of you if you told them that, 40 years later, they would be able to access the sum of human knowledge just by asking the question they wanted answered, or order any product on Earth just by asking for it, that they would carry supercomputers many times more powerful than the most powerful contemporary supercomputer in their pockets, that computer games would have open worlds the size of cities with graphics close to indistinguishable from real life, etc.? That's right, they would probably tell you what you are telling me now: none of this will ever happen, not in 40 years, and not in 200 years. And yet here we are. Think about it - the speed of technological progress is scary, for better or for worse.
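        A quick back-of-envelope check on that "20,000 years" figure - a minimal sketch, assuming the rate of progress doubles every decade (the doubling period, and the whole compounding model, are my assumptions for illustration, not figures from this thread):

```python
# Rough arithmetic behind "a century of accelerating progress":
# assume year t runs at 2 ** (t / doubling_years) times the year-0 rate,
# then add up how many "year-0-equivalent" years of progress accumulate.

def equivalent_years(horizon_years=100, doubling_years=10):
    return sum(2 ** (t / doubling_years) for t in range(horizon_years))

total = equivalent_years()
# With these assumed numbers, total comes out around 14,000 -
# the same order of magnitude as the 20,000-years claim.
```

        Different assumed doubling periods move the number around (doubling every 9 years already gives well over 20,000), but the qualitative point survives: under any compounding model, a century of accelerating progress dwarfs a century at today's rate.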

        Yes, there are indeed fully self-sustaining automated systems (https://www.popsci.com/technology/article/2010-07/diarrhea-bot-sporting-artificial-gut-eats-excretes-all-itself/) out there right now. Sure, they are currently very primitive, because self-sustainment is very hard, but this will be standard when AI becomes more advanced and more general.

        I have actually been thinking about the dangers of living in a digital world. For example, disruptions in code could result in massive suffering. This problem in particular can be resolved by mechanically shutting down the digital consciousness as soon as some level of pain is reached, or by not allowing the sensation of pain past a certain level at all. I know this sounds simplistic, but our civilisation will be so advanced by then that a working solution will certainly be found. What other problems can arise? I know that some of them are unknown unknowns, but can you give me an example of what sort of thing you are talking about?

        • You can't sustain that economic growth if your populace shrinks and becomes stupid. IQ is falling in most developed nations; the Flynn effect has been dead for ages in the West. Flynn himself has literally published research showing falling IQ levels in first-world countries. And basically no nation that becomes first-world can sustain even a replacement-level birth rate. What good is it if third-world countries get better if they throttle once they hit a certain stage?

          Self-sufficient full automation is likely completely impossible, dude. The data does not point in the direction that full automation is possible, or even that a single thing has accomplished it; it points in the direction that there are gains in technology, which have massively increased in the modern world. The Amazon article you showed in absolutely no way shows that those robot janitors are a thing. It straight up says in the article that "patents and patent applications don't guarantee the idea will become reality", and the robotic gut in the article you shared is not self-sufficient and self-sustaining. It can't create more of itself. It can't repair itself. Hell, it can't even find its own food source; it has to be provided with one. It doesn't even have programming remotely concerned with the basic materials needed to fix itself.

          If you don't force everyone to live in a virtual world full time, you'll have people seeking to destroy it, to name just a single factor. All they need to do is hack into a single section of AI and it's all over. How are you going to have a self-sufficient piece of technology when everything goes to shit because they've shut off a single piece of AI required to keep the whole system functioning, and your human populace has been entombed in some AI world, without the knowledge to deal with these things, for the last 50 years? Or a single piece of the code goes wrong, things stop working, leading to more things that stop working, and a populace without even the knowledge to create a piece of wire is pushed into the real world?

          How are you going to deal with climate issues you never knew about at the time? How are you going to fight off countries that don't want to live in a virtual world and are actually creating arms and weapons to take out the machines, or terrorists? All they need to do is find a single weak spot in the programming and defence system to exploit. How are you going to keep humans around as a species to enjoy this future if they're so wrapped up in this brilliant virtual world that they don't care about the real one? And if you artificially breed them, how are you going to programme into the algorithm the evolution that will happen to those humans, when evolution is literally caused by random mutation?

          There's not a single piece of technology that has been made that's self-sufficient and fully-automated, and we're not even close to creating one.

          • Crap, didn't know about that, but it seems like you're right. One hypothesis is that high levels of immigration from less developed countries may have been a factor, but it seems the reverse Flynn effect is real. Increased levels of pollution and decreased time in fresh air due to increased use of computers and gaming are another possible factor. Still, while this isn't great, it certainly won't impact the rate of technological innovation that harshly, given that 1) the decline is nowhere near as sharp as the increase seen during the Flynn-effect era and 2) most innovation is produced by a very small group of individuals at the very end of the curve, where most of the variation is the result of randomness, and any changes caused by recent and likely future declines in IQ fall well within the scope of noise.

            Third world countries getting better is better than them not developing at all, even if they eventually throttle. First-world nations will only stop being first-world because they will be economically and technologically surpassed by other nations. Sure, we may lose loads of potential innovators from these countries, but we will gain loads more from the newly-developed nations. So, in my opinion, nothing to worry about.

            If it's full-on completely impossible, what is the cut-off point? At which point will technological progress, which has been on an exponential trajectory since the dawn of mankind, suddenly halt? Is there some fundamental law of physics that says "full automation is impossible"? I'd like to hear your response on this one. People like to label everything as "impossible" these days, and such people usually get proven wrong within a couple of decades. Aeroplanes, space rockets, commercial personal computers, etc. I don't and probably won't ever understand the rationale behind calling things "impossible" when there are clear trends towards them and they are clearly physically possible. I understand "not feasible in the near future" or, at worst, even "something humanity won't ever reach" for some apocalyptic reason, but "impossible"? I don't think I will ever get that.

            Okay, perhaps the article I sent you was out of date, but the point is that many companies already rely on technologies like the one in the link (e.g. https://www.nanalyze.com/2018/12/robot-janitors-commercial-floor-cleaning/). Yes, I am aware that these aren't perfect janitors, in that they are incapable of, say, picking up larger objects, but, as I have already explained, this is all coming in the near future.

            The gut robot is actually fully self-sustaining: if it were released into a sewer, it would be able to survive all by itself. As for self-repair, that is far too advanced a process to be feasible today. There is a reason mechanical engineering is one of the most in-demand professions right now, and mechanical engineers are probably going to be some of the last people to lose their jobs. But if we are talking about self-sustainability and self-sufficiency, we already have that.

            You see, when you are dealing with AI more intelligent than humans and specialised exclusively in one field - security - it becomes next to impossible for a human to hack anything guarded by it. I think something like that may be a problem in the early days of consciousness simulation, but as soon as AI becomes better than humans at security, there is no chance. Things not working is a more serious problem, but I think that, as long as a large enough number of AI programmes written using different methods is there to watch over any potential errors in the code, we should most likely be fine. If one AI doesn't see the error, or two, or a thousand, one out of a million probably will (recall that, by this point, AI will be many orders of magnitude more skilled at writing code - and at detecting imperfections - than humans).
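            The "one out of a million will catch it" point is just compounding probabilities - a toy sketch, assuming (and this is the big assumption) that the checkers' mistakes are independent; both numbers below are made up for illustration:

```python
# If each of n independent checkers misses a given bug with probability p,
# the chance that every single one misses it is p ** n.

def prob_all_miss(p_miss_one, n_checkers):
    return p_miss_one ** n_checkers

# Even checkers that each miss 99% of the time compound quickly:
print(prob_all_miss(0.99, 1_000))      # ~4.3e-05
print(prob_all_miss(0.99, 1_000_000))  # underflows to 0.0
```

            Of course, if the checkers share a blind spot (same method, same training), the independence assumption collapses - which is why the caveat about programmes "written using different methods" is doing the real work in this argument.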

            As I said, politics will become extinct. "Country" will become synonymous with "culture". There won't be presidents or armies - there simply won't be a need for them (I already explained this bit). Also, if people don't trust this virtual world, no one is forcing them to go inside. Terrorists are a bigger issue; honestly, I don't see a perfect solution at the moment, but things like that tend to solve themselves naturally. I reckon the same kinds of questions were being asked when the first nuclear bomb was being developed, and yet here we are, 80 years later, still alive and well. I don't know what kinds of technologies will exist in 200 years' time, which planets humanity will have populated, or what the general state of affairs will be, but seemingly serious concerns existed for every type of revolutionary technology, and basically all of them were resolved when the technology was implemented. Still, it's a valid concern.

            Finally, humans won't need to care about the "real world". The virtual world will be their "real world". And if you are referring to the biological species Homo sapiens, I don't see why it's important to preserve it. If you want to think of it this way, they will have evolved into a different, non-biological species. I don't see anything wrong with that. Chances are, human curiosity will take over, and many will still want to explore the depths of "real-world" space. Whether they do it in their natural, biological form or some other form isn't, in my opinion, an important question.
