Meltdown is “probably one of the worst CPU bugs ever found”


Laron
QHHT & Past Life Regression · Staff member · Administrator · Creator of transients.info & The Roundtable
PCs, laptops, smartphones, tablets, you name it: virtually every modern computer system with an Intel, ARM or AMD CPU (central processing unit) has been found to have two major security flaws that could allow hackers to steal sensitive data, including banking information and passwords.

The researchers have shown that the flaw goes back as far as 1995 in Intel processors.

The Meltdown flaw could enable attackers to bypass the hardware barrier between the apps run by users and the computer's core memory. The fix involves changing the way the operating system handles memory, and as a result, machines that receive the fix could slow down by up to 30%.

Spectre, the second flaw, allows attackers to trick otherwise error-free apps into handing over private information.

More information can be found over on the Guardian here, but it sure is interesting that such a major flaw took this long to come to light, as our shift in consciousness continues on!

[Image: Meltdown and Spectre]
 
Laron (OP)
What really gets me, from a technical and programming direction (I learnt five programming languages in my early IT education), is that I can't seem to figure out why any fix would slow down all the processes by 30%, unless something was constantly running and doing something in the background, and then what could that be?

Certainly, re-coding the way they process could slow them down, but why do that? Just fix the issue, easy. Something suspicious could be going on here on a global scale, or it could be a really rare issue like they say, one that needs an unusual fix.
 

Kevin C
Involved Wayfarer
Laron said: "... I can't seem to figure out why any fix would slow down all the processes by 30% unless something was constantly running and doing something in the background, and then what could that be?"
The problem is that it is a hardware "microcoding" issue, down at the CPU level on the motherboard. What they are doing right now is a "soft" fix for a "hard" problem, and it is only temporary until hackers figure out how to break the "soft" fix.

You have to rely on firewalls on your network to cut off "parasitic hacker" connections (this is what all the cloud platforms are doing; the only problem is that all it takes is one lousy customer with a stupid password for someone to break in).

Basically, the hard-coded "branch prediction"/"speculative execution" schemes (because prediction requires "free-flowing" information) are open doors for hacking, since any communication is a two-way street: all you need to do is hook a "speaker" up to one line and you can get at the information.
Keep in mind that all data (especially from your hard drive and your USB drives) has to go through the CPU, which interprets the microcode and executes it.

Let me give you a simple example.
Assume you and your neighbor have two lines hanging between your houses, because you have agreed to freely exchange food and other stuff. To protect your privacy, you enclose the lines in a black latex cover.
(Yes, it's painfully obvious what the problem is, but that is basically the CPU design.)
Now imagine you and your neighbor are passing so much stuff that you need to figure out ahead of time how much to buy at the store. So you cut a hole to look at the flow of stuff, and given certain combinations, you predict what is needed next.
For example, if the request was "flour, eggs, sugar", you might predict the next items will be "vanilla, chocolate, yeast, milk".
This is, in a nutshell, what the branch prediction and speculative execution circuits hard-wired into the CPU do.
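
To put the grocery example into code: below is a minimal C sketch (the array names and sizes are invented for illustration, not taken from any real exploit) of the kind of bounds-checked read that a speculating CPU will run ahead of while the check is still pending; this is essentially the pattern described for Spectre variant 1.

/* Illustrative only: a bounds-checked read the CPU may execute speculatively. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* probe array: a 512-byte stride per possible byte value */

void victim_function(size_t x)
{
    if (x < array1_size) {            /* the "window": a branch the CPU predicts */
        /* While the bounds check is still being resolved, the CPU may run this
         * speculatively with an out-of-bounds x, fetching a secret byte and
         * pulling the matching part of array2 into the cache as a side effect. */
        uint8_t secret = array1[x];
        volatile uint8_t tmp = array2[secret * 512];
        (void)tmp;
    }
}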

What the researchers basically did was find a way to bypass the entire application stack and sneak a peek into that "hole" in the CPU to extract the data flowing through the pipe (in the example above, peeking through that window). Apparently they were able to do so rather easily.
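
The "peek" itself is usually a cache-timing measurement: the speculated data is never visible directly, but you can tell which part of the probe array got cached by how quickly it loads afterwards. A rough, x86-only sketch of that timing step (assuming GCC/Clang intrinsics; the threshold would have to be tuned per machine) could look like this:

#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

/* Time a single load. A "fast" result means the line was already cached,
 * i.e. something (such as the speculative access above) touched it. */
static uint64_t time_load(volatile uint8_t *addr)
{
    unsigned int aux;
    _mm_mfence();                    /* settle pending memory traffic */
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                     /* the timed load */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

/* Usage pattern, per guessed byte value v (THRESHOLD is machine-dependent):
 *   _mm_clflush(&array2[v * 512]);   // flush the probe line
 *   victim_function(malicious_x);    // provoke the speculative access
 *   if (time_load(&array2[v * 512]) < THRESHOLD)  // fast => v was the secret byte
 *       ...
 */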

This affects practically all modern microprocessor designs, with the exception of certain ARM architectures, simply because those are "hard-driven, dumb" processors (i.e., give me an instruction, I execute it, next). They are basically just speed-driven, with no complicated future-prediction schemes.

Since this is hard-coded, it is very hard to cover it up without slowing things down dramatically.
Simply put, let's go back to our example above.
You have that window you have been using; now imagine you find out someone has been stealing stuff.
So you cover up the window. You are blind, and you basically go back to the beginning, where you just react when stuff arrives, except you are handicapped because all your previous experience and routines are based on being able to predict what is needed.

The same applies here. Depending on what you are trying to do, you could see a 0% difference or you could see 30%+ (intensive computations will see the most dramatic slowdowns). The problem is this:
the entire computer architecture, from the DRAM to the hard drive to the ICs around the CPU, is configured to take advantage of the speculative execution and branch prediction methods. Basically, the CPU uses internal and external memory banks to "create instruction/execution blackboards" so it can pull from them quickly once it is almost sure what the next instruction is. So the only software/BIOS workaround is to completely block these schemes.
Well, when you have an entire electronics industry built around this "instruction prediction" architecture, any software workaround will slow the whole thing down significantly, not unlike covering up the window in the simplistic example. Now every block on your motherboard has to wait for the previous block to completely execute its instruction before proceeding.
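
As a very rough illustration of what that kind of software workaround looks like at the code level, one commonly documented x86 approach is to put a serializing barrier (LFENCE) after the bounds check so the CPU cannot run ahead of it; the arrays here are the same invented ones as in the earlier sketch, and the stall the barrier introduces is exactly the slowdown being described.

#include <stdint.h>
#include <stddef.h>
#include <x86intrin.h>   /* _mm_lfence */

static uint8_t array1[16];
static size_t  array1_size = 16;

uint8_t patched_read(size_t x)
{
    if (x >= array1_size)
        return 0;
    /* Serializing barrier: the load below cannot start until the bounds
     * check has actually resolved, which is where the performance goes. */
    _mm_lfence();
    return array1[x];
}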

So, the only reasonable solution is to..... buy new computers. The problem with this is that it takes at least five years to design an entirely new CPU architecture, build it, test it, mass-produce it, and build the supporting infrastructure for it (ICs, DRAMs, encoders, comm chips, LCD drivers, HD readers, etc.).

So, the software is just a patch. The problem now is that it will take time to completely secure it.
Oh, and it does not matter even if you have a VPN, or Linux, or 256- or 512-bit encryption. As long as anyone can see your data stream, any encryption can be broken with today's ML/AI algorithms and high-speed processors.

Now you can see what a big headache the entire semiconductor industry has, as they have no good answers.
 

Snowmelt
Staff member · RT Supporter · Board Moderator
So, really, it's like that moment when you "wake up" and realise the only way to drop the weight is to modify your diet for all time and get on the floor and do those exercises. Maybe not what you had in mind (heh, heh).

The way you explain it, Kevin C (which is very good for us grocery-minded shoppers), feels like one of those expected "revelations" that are going to break this year. Basically, something has been revealed, and there's no going back to not-knowing anymore. Looking at the bigger application outside computing, it means that hidden, deceptive, or just plain secret short-cuts that turn someone else's lose-lose into your own win-win are no longer going to be of any use in the new frequencies. All the steps of the dance must be taken: no short-cuts, no smoke and mirrors.
 

Sam Vause
Involved Wayfarer · RT Supporter
I'm gonna step WAY out on a limb here and first admit that I ran Intel's Server Validation Labs (the US domestic ones, not the ones in Mexico or Israel) for a while. Laron's analogy is reasonable, and this whole thing is so foreign to anything I've ever seen in the spec sheets that it has to have been deliberate from the get-go, and then buried far, far down in the architecture. There's a CIA/NSA smell to this one, and not a good one, and I'm willing to bet a lot of my former colleagues (I retired a year and a half ago) are pissed that this was present in their products and got through their damned fine validation tests.

And my husband reminded me that this is present in several architectures from different vendors, pretty much ensuring there's some skullduggery afoot.....
 

Kevin C
Involved Wayfarer
Sam Vause said: "... this whole thing is so foreign to anything I've ever seen in the spec sheets that it has to have been deliberate from the get-go, and then buried far, far down in the architecture. There's a CIA/NSA smell to this one ... and my husband reminded me that this is present in several architectures from different vendors, pretty much ensuring there's some skullduggery afoot."
Sam,
Keep in mind this is silicon-level architecture, i.e., the chip-fab division's territory. Therefore, nothing above the BIOS and the application stack (which is where you and the validation team sit) would be able to pick it up.
In my college engineering education I was able to take a class from a professor who specializes in microprocessor architecture (he was working on asynchronous systems, i.e., CPUs that are not clock-driven, hence no GHz specs), and even there you could tell the single-minded focus on speed versus power (and, as a consequence, heat). He had research projects running with all the big companies: Intel, Cisco, TI, Qualcomm, etc.
This was and still is the standard processor architecture that everyone followed, because it was easy to implement, stack, augment, etc. Therefore, it covers everyone: Qualcomm, TI, Intel, Mediatek, Micron, Nvidia, AMD, ATI, etc.
So, on one hand, maybe there was a "mass brainwashing" in processor architecture at the top schools from the '60s to the 2000s, which would lend credence to the CIA/NSA conspiracy; or all the research groups were so wired (Moore's Law) into the same architecture taught to them that they did not consider other issues (similar to the uranium vs. thorium nuclear reactor design issue). They ended up focusing on "building a better mousetrap" instead of considering other processor architectures.
One more thing: this issue may end up returning our computers/phones to one-inch-thick sizes. With "brute-force" computation we may need to add heat sinks back, since today's applications generate huge amounts of heat. This depends on how far advances in silicon fab can mitigate the issue.
Hailstones, I think this may be the catalyst for the next level: quantum computing. It is a completely new architecture in hardware, software, design and paradigm (similar to our 3D to 4D/5D shift), such that current processors pale in comparison. The question is whether it can be mass-produced at cheap enough prices, and whether the entire silicon-based industry is able to support it.
 

Sam Vause
Involved Wayfarer · RT Supporter
Kevin C said: "Keep in mind this is silicon-level architecture, i.e., the chip-fab division's territory; nothing above the BIOS and the application stack would be able to pick it up. ..."
I respect your thoughts. We did have access to - and modified - the RTL; it was part of our validation work to identify the core problem and the fix, down to the RTL. Fixes were ofttimes implemented in microcode (CPU and/or PCU), or in BIOS, or, if there was no other choice, in the RTL (the preference was for the metal layer, but then there were times....). In a few cases, in all of these places. 'Tis a messy, messy world....

And I truly think you nailed the Quantum Computing imperative, too :)
 

Kevin C
Involved Wayfarer
Sam Vause said: "We did have access to - and modified - the RTL; it was part of our validation work to identify the core problem and the fix, down to the RTL. ... 'Tis a messy, messy world.... And I truly think you nailed the Quantum Computing imperative, too :)"
Hi Sam,
Ah, thanks for the correction! Apologies.... :)
But it is tough, though, since all the textbooks for microprocessor architecture point to the same design! So unless you were willing to protest the microprocessor "bible", it would be nigh impossible to flag such defects without risking your future career, lol.

Yes, either we have quantum computing, or we have something else entirely. In any case, the current silicon regime will have to be upended completely; otherwise we end up with all kinds of other trade-offs.
 
