
That makes sense, but forcing the user to change modes is bad. If I'm using a touchscreen, I often can't make noises; if I'm using voice, I often can't touch the screen.


>forcing the user to change modes is bad

Congratulations, you named the very reason this technique is effective. I am not being facetious here.

The whole idea is to make the user perform disruptive, ill-fitting actions (both mentally and physically) to improve the chances of catching errors.

First, it makes the mind "shift gears", or perform a "context switch", with all the related cache-flushing and prediction-discarding that entails.

Second, it activates brain regions that were hitherto dormant: the ones associated with other senses and skills, in particular vocal skills. Fleshing out abstract concepts into concrete spoken words helps catch mistakes, just as putting abstract feature requests into concrete diagrams or code helps catch errors and inconsistent expectations & assumptions.


Fair enough; I missed that mikro2nd mentioned safety-critical systems. In that case, sure, let's adapt the whole environment to allow for those mode changes.

I was thinking more about systems where no lives are at stake. There, I think there's no need to change modes: one can be DISRUPTIVE ENOUGH by using the same I/O channels, which are quite flexible already.



