Also not necessarily a bad thing.
But most importantly, not inevitable.
For those interested in the topic but not enough to watch a video over half an hour long: the video breaks the claim that a technological singularity is inevitable into several parts, discusses each in depth, and reaches a conclusion in each case:
Part one: Computers are getting faster
Conclusion: True. This cannot continue forever because of physical limits, but it will continue into the foreseeable future.
Part two: The rate of improvement is accelerating (Moore's Law holds now and will continue to hold)
Conclusion: Outright false.
Part three: Superhuman AI is possible
Conclusion: We can't be sure but most likely true.
Part four: Superhuman AI is capable of creating a superior version of itself.
Conclusion: Possibly true but far from guaranteed for a number of reasons.
Part five: The design of the second superhuman AI by the first will be faster than the design of the first by us.
Conclusion: Most likely false.
Part six: The cycle continues to accelerate. The second AI designs the third faster than it was designed, the design of the fourth even faster still... etc.
Conclusion: Almost certainly false.
The video is not an attempt to prove that a technological singularity is impossible (in fact, it most certainly is not impossible), but rather to shatter the common misconception that one is inevitable once a superhuman AI is created, and to question the idea that it would necessarily be a bad thing. Some interesting philosophical points are raised in this regard, such as whether a secretly malevolent superhuman AI would actually be hostile: there are a number of logical reasons it might not attempt to harm humanity even if it wished to and were capable of doing so.