Having it defined is very important. However, part of the problem is that the definition actually takes away guarantees that many CPUs (e.g. x86) give you. For example, double-checked locking works on x86 but famously does not in Java without extra care. The fact that Java in some ways gives you a weaker ordering model than the hardware you started with is what makes it particularly insidious.
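To make the double-checked locking point concrete, here is a sketch of the idiom in Java. Without the `volatile` on the field, the JMM permits another thread to observe a non-null reference to a partially constructed object; since Java 5, marking the field `volatile` restores the happens-before edge and makes the pattern correct. The class name is just for illustration.

```java
class Singleton {
    // `volatile` is what makes this safe under the Java Memory Model.
    // On x86 the plain version often appears to work, which is the trap.
    private static volatile Singleton instance;

    static Singleton getInstance() {
        Singleton local = instance;           // first check, no lock
        if (local == null) {
            synchronized (Singleton.class) {
                local = instance;             // second check, under lock
                if (local == null) {
                    instance = local = new Singleton();
                }
            }
        }
        return local;
    }
}
```

Reading into a local first is a common micro-optimization so that the hot path does only one volatile read.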
You are free to rely on the guarantees made by a particular hardware architecture. The JVM implementation doesn't "take them away" -- the spec merely doesn't guarantee them. But if you do, then you might lose portability.
My impression is that by not guaranteeing them, the spec effectively does take them away, because an implementation is free to reorder however it chooses, including in the compiler. So it doesn't matter that your code is executing on a processor that doesn't reorder instructions: the JIT compiler (or even javac) may already have reordered the operations before they reach the CPU.
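A minimal sketch of the kind of race this allows (field and method names are hypothetical). Because neither field is `volatile`, the compiler or CPU may make the write to `done` visible before the write to `x`, so a concurrent reader could legally see `done == true` while `x` is still 0:

```java
class Reordering {
    int x = 0;
    boolean done = false;   // non-volatile: no ordering guarantee

    void writer() {
        x = 42;
        done = true;        // may become visible before x = 42
    }

    Integer reader() {
        if (done) {
            return x;       // under the JMM, this may observe 0
        }
        return null;        // writer not (yet) observed
    }
}
```

Run single-threaded the result is of course 42; the point is that the JMM gives a second thread no such guarantee unless `done` is made `volatile` or the accesses are synchronized.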
Even a C compiler is allowed to reorder instructions. How is any compiler supposed to know whether you prefer performance (and therefore reordering) or want to rely on a processor-specific feature? That's why any language that runs on more than one architecture needs to define a language-level memory model.