That got me wondering whether "just convert to hiragana" is a solved task or a research-team-and-five-years problem[0], and Google showed me an article[1] that gave me a facepalm. Quoting via Google Translate (square brackets are mine):
> - As a result, when the string "明日" ["tomorrow"] is entered into TTS, the TTS model [・皿・] outputs an ambiguous pronunciation that sounds like a mix of "asu" and "ashita" (something like "[asyeta]").
> From this, we found that the proposed method makes it possible to obtain, from proprietary data, a subset (more than 80% of the total) in which the consistency between speech, graphemes, and phonemes is almost certainly maintained.
> Another possible cause is a mismatch between the domain of the training data's audio (all [in read-aloud tones]) and the inference domain.
My resultant rambling follows:
1. Sounds like the general state of Japanese speech datasets is a mess
1.1. they don't maintain a great, useful correspondence between symbols and audio
1.2. they tend to contain too many "transatlantic" voices and not enough casual speech
2. Japanese speakers generally don't denote pronunciations in text (see the sketch after this list)
2.1. therefore web crawls might not contain enough information about how texts are actually pronounced
2.2. (potentially) there could be some texts that don't map to pronunciations at all
2.3. (potentially) maybe spoken and written Japanese are still a bit divergent from each other
3. The situation for Chinese/Sinitic languages is likely __nowhere__ near as absurd, so Chinese STT/TTS techniques might not be well equipped to deal with this mess
4. This feels like a much deeper mess than the commonly observed, surface-level Japanese TTS problems such as obvious basic alignment errors (e.g. pronouncing "potatoes" as "tato chi")
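To make point 2 concrete: a dictionary-based converter just picks one reading per word, with no way to recover the intended pronunciation from the text alone. A minimal sketch, assuming pykakasi 2.x and its convert() API:

```python
# Sketch: a dictionary-based kana converter picks ONE reading and cannot
# tell from the text alone whether 明日 should be "asu" or "ashita".
# Assumes pykakasi 2.x and its convert() API.
import pykakasi

kks = pykakasi.kakasi()
for item in kks.convert("明日は晴れ"):
    print(item["orig"], "->", item["hira"], "/", item["hepburn"])
# e.g. 明日 -> あした / ashita   (one reading chosen; context is ignored)
```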
Wait, are you saying that American homes didn't have regular camera doorbells before Ring happened? Those things predate LCDs; the earliest implementations date back to at least the mid-1980s.
IIUC, that's what the sci-fi-looking LED panels on really old computers were. They showed all the internal status of the CPU as well as the CPU-RAM bus, and the toggle switches allowed individual bit overrides.
The operator sets the CPU RESET switch to RESET, powers on the machine, and starts toggling the RAM address and data switches, like HHLL HLLH HHHL LLLL. The operator then presses and releases the STEP push switch. Address 0b 1100 1001 is now set to 0b 1110 0000. This is repeated until the desired program or bootloader is complete. The operator finally sets CPU RESET back to NORMAL and the CLOCK dial to RUN.
The CPU exits the reset state, initializes the program counter with the reset vector, e.g. 0b1000, and starts executing instructions at PC++: 1000, 1001, 1010, and so on. Then, oh no, the EXCEPTION indicator comes on and the LEDs show 0b 1110 0000. That's "divide r0 by 0", etc.
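A minimal sketch of that flow in Python, for a toy machine with a made-up ISA (no real front panel works exactly like this; H = 1, L = 0):

```python
# Toy front-panel model: switches are bit strings, STEP latches data into RAM.
MEM_SIZE = 256
mem = [0] * MEM_SIZE

def switches(pattern: str) -> int:
    """Turn a switch pattern like 'HHLL HLLH' into an integer."""
    return int(pattern.replace(" ", "").replace("H", "1").replace("L", "0"), 2)

def step(addr_switches: str, data_switches: str) -> None:
    """What the STEP push switch does: latch the data switches into RAM."""
    mem[switches(addr_switches)] = switches(data_switches)

step("HHLL HLLH", "HHHL LLLL")   # mem[0b11001001] = 0b11100000

RESET_VECTOR = 0b1000

def run() -> None:
    pc = RESET_VECTOR                # leaving reset: PC := reset vector
    while True:
        opcode = mem[pc]
        pc += 1                      # fetch at PC++
        if opcode == 0:              # HALT in this toy ISA
            break
        # decode/execute other opcodes here; a divide-by-zero would light
        # the EXCEPTION indicator with the offending byte on the LEDs
```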
They didn't actually spend half of every day toggling those switches. They toggled in their equivalent of a bare-minimum BIOS/recovery loader, and the rest was loaded from magnetic or punched-paper tapes. Only when a computer was booted from a blank slate, or had crashed and needed debugging, did users resort to that interface.
If they had the CPU-RAM main bus split into ROM and RAM address ranges, in such a way that fetching from the reset vector yields the first byte of a BIOS program lithographically etched into a ROM chip, then simply powering on the machine does the same thing as loading the BIOS manually.
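A minimal sketch of that address-decode split, with a made-up memory map:

```python
# One bus, two chips: the decoder routes each address to ROM or RAM.
ROM_BASE = 0b1000                 # reset vector points at ROM's first byte
rom = bytes([0xE0] * 128)         # stand-in for the mask-programmed BIOS
ram = bytearray(ROM_BASE)         # the addresses below ROM in this toy map

def bus_read(addr: int) -> int:
    """Address decode: low range selects RAM, ROM_BASE and up selects ROM."""
    return rom[addr - ROM_BASE] if addr >= ROM_BASE else ram[addr]

assert bus_read(0b1000) == 0xE0   # power-on fetch already lands in the BIOS
```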
There were also things like magnetic core memories. They didn't require lithography to fabricate, and there were both ROM and RAM kinds of those.
Of course, if you have sufficiently simple input devices that can do DMA, then you can do something like what the IBM 1401 did:
> When the LOAD button on the 1402 Card Read-Punch is pressed, a card is read into memory locations 001–080, a word mark is set in location 001 to indicate that it is an executable instruction, the word marks in locations 002–080 (if any) are cleared, and execution starts with the instruction at location 001. [...] To read subsequent cards, an explicit Read command (opcode 1) must be executed as the last instruction on every card to get the new card's contents into locations 001–080.
I imagine the additional wiring behind that LOAD button must have been pretty small: the READ functionality already exists in the 1402 device, and it already has an output signal that indicates when a read is finished (so the 1401 Processing Unit knows when a Read command is done), so you just need to tie that signal into resetting the PC to 001 and starting the clock.
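As a sketch of that sequence (a toy model in Python following the quoted description, not faithful 1401 behavior):

```python
# Toy model of the quoted LOAD behavior: locations are 1-indexed 001-080,
# word marks live in a parallel array rather than in the memory cells.
mem = [" "] * 81                 # index 0 unused; 1..80 are card columns
word_mark = [False] * 81

def read_card(card: str) -> None:
    """The 1402's READ: copy one 80-column card into locations 001-080."""
    for col, ch in enumerate(card.ljust(80)[:80], start=1):
        mem[col] = ch

def load_button(card: str) -> int:
    """LOAD = READ, then fix up the word marks, then start at location 001."""
    read_card(card)
    word_mark[1] = True          # 001 marked as an executable instruction
    for col in range(2, 81):
        word_mark[col] = False   # clear any stale word marks in 002-080
    return 1                     # PC := 001 and the clock starts
```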
Why can't we just dump massive currents into spring-return solenoids with ~5mm or ~1/4" of travel, and amplify that motion through tendon systems to cover full joint ranges of motion?
Heat goes up with the square of the current, so putting in 10x the current to get 10x the force means 100x the heat.
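To make that scaling concrete, a quick back-of-the-envelope (the resistance value is made up):

```python
# Back-of-the-envelope ohmic heating, P = I^2 * R (resistance is made up).
R = 0.5                          # coil resistance in ohms (assumed)
for scale in (1, 10):
    current = 2.0 * scale        # amps; force taken as ~proportional to current
    power = current**2 * R       # watts dissipated in the coil
    print(f"{scale:>2}x current -> {power:.0f} W of heat")
# 1x current -> 2 W of heat
# 10x current -> 200 W of heat   (10x the force, 100x the heat)
```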
Still, I think this idea is under-explored. There are probably applications for robots that move really fast, but only for a second before having to cool down.
There is no free lunch: using mechanical amplification results in the same problem as described above, and spring return means that your actuator has to push that much harder to overcome the spring.
The net is that a couple of figures of merit limit the performance of electromagnetic systems, and being sneaky won't let you exceed them.
Another way to look at it is that BLDC motors are sort of like solenoids, with the motion happening between the poles of the motor. There's a limit in both cases to what you can do and how much flux you can push.
TL;DR: you need high current, meaning a lot of ohmic heating, with non-negligible back-EMF resulting in even more losses. Rotating motors essentially "lengthen" the travel of the "plunger" compared to linear motors.
The theme is guidance systems, especially guidance computers. That's the big, big, big no-no. I'm surprised this still hasn't been taken down and the house flashbanged and all.
Those low-cost drones are just a fad. The fiber-optic, TV-guided exploding thing is literally the oldest kind of anti-tank missile. Russian winged cruise missiles are even older, early Cold War kinds of stuff. It just so happens that none of Ukraine, Russia, Iran, etc. has air dominance or proper wartime production capacity, so they must resort to substituting military equipment with remixes of AliExpress stuff.
Just in the conflict in Iran we all saw Apaches handling drones with ease. They could probably put a minigun or even a microgun on an MQ-9, which is a drone, but not like the ones discussed here. Or someone might realize a turret on a Super Tucano is cheaper than a Reaper ground control trailer. My point is, Ukraine and Russia throwing drones at each other is not a sure sign that this is the war of the future.
The war in Iran proves the opposite: it is actually the future. The US could easily establish air dominance over Iran, yet it can't stop their military from launching smaller drones both in the air and at sea. The Strait of Hormuz is effectively closed, and air power alone seems unlikely to fix the situation. If you want to effectively eliminate an opponent nowadays, you need an army of drones; the economics don't work out if you are only fielding expensive ships, planes, and missiles. And regarding your point that an Apache can easily shoot down a drone: roughly 9 out of 10 drones on the Russia-Ukraine front line get shot down, and the remaining 10% account for about 80% of the casualties (the rest being mostly artillery and mines).
Autonomous flight is significantly easier than autonomous driving. You just fly between points in space, and there's nothing but air in between. Ground control handles most of the collision avoidance, and if that's not available, it's easily achieved by moving 300 ft / 100 m up or down.
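As a sketch of how blunt that deconfliction can be (all types and values made up):

```python
# Waypoint routes with vertical-offset deconfliction: same ground track,
# shifted up or down when no ground control is coordinating traffic.
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float

def deconflict(route: list[Waypoint], offset_m: float = 100.0) -> list[Waypoint]:
    """No ground control available? Shift the whole route ~100 m up or down."""
    return [Waypoint(w.lat, w.lon, w.alt_m + offset_m) for w in route]

route = [Waypoint(50.45, 30.52, 120.0), Waypoint(50.50, 30.60, 120.0)]
higher = deconflict(route)        # same path over the ground, now at 220 m
```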
There's a significant number of Japanese loanwords in modern Korean due to the Japanese annexation (1910-1945/1965), and in modern Chinese to a much lesser extent.
These aren't an indication of shared vocabulary or ancestry, just loanwords for concepts that were novel and scientific by Victorian standards.