"HammerheadFistpunch" (hammerheadfistpunch)
02/19/2020 at 11:47 • Filed to: Nerd corner | 3 | 6 |
The big news with neural-network video is the fear of deep fakes, and while that’s a totally legitimate fear, the thing I’m looking forward to most is advanced frame creation. There is a new tool called DAIN that is producing some CRAZY good upframing (frame-interpolation) results.
Note: this is a real explosion with real loss of life. Heads up.
I mean...this is going to be really cool.
JawzX2, Boost Addict. 1.6t, 2.7tt, 4.2t
> HammerheadFistpunch
02/19/2020 at 11:56 | 1 |
That’s some pretty impressive interpolation.
facw
> HammerheadFistpunch
02/19/2020 at 12:06 | 3 |
I’m not too worried about the neural nets. Seems like we just need to train them to identify fakes as well as create them, which seems doable. And there are a lot of very cool applications.
BaconSandwich is tasty.
> HammerheadFistpunch
02/19/2020 at 14:15 | 1 |
That estimated depth map thing is neat.
Future next gen S2000 owner
> HammerheadFistpunch
02/19/2020 at 14:54 | 0 |
Witchcraft.
Bryan doesn't drive a 1M
> HammerheadFistpunch
02/19/2020 at 18:56 | 0 |
On the video game side, there’s DLSS, which does something similar: render a game at a lower resolution, then use a neural network to upscale it. I’ve found the results to be really nice.
tpw_rules
> facw
02/19/2020 at 20:38 | 2 |
The funny thing is that’s how they train the networks in the first place. Drastically simplified, they first train a network to generate real-looking images, then they train a second network to determine whether the first network’s images are real or not. The second network gets better and better at determining if an image is real or fake, thus forcing the first network to get better and better at generating fake images that the second network thinks are real. It’s a “generative adversarial network”.
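That adversarial loop can be sketched in a few lines. This is a toy 1-D example in plain numpy (the setup, names, and numbers are all illustrative, nothing like the huge image models used for deepfakes): a linear generator tries to mimic samples from a Gaussian centered at 3, while a logistic-regression discriminator tries to tell real samples from fake ones, and each update makes the other's job harder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# "Real data": samples from a 1-D Gaussian with mean 3 (arbitrary toy target).
REAL_MEAN, REAL_STD = 3.0, 1.0

# Generator: fake = a*z + b, where z is random noise. Starts far from the data.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.1, 0.0

lr = 0.02
for step in range(3000):
    z = rng.standard_normal(64)
    real = REAL_MEAN + REAL_STD * rng.standard_normal(64)
    fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1.0 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1.0 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1.0 - d_fake) * w      # d(-log D(fake)) / d(fake)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

# After training, the generator's output distribution should have drifted
# from mean 0 toward the real data's mean of 3.
fake_mean = float(np.mean(a * rng.standard_normal(10_000) + b))
print(f"generator mean after training: {fake_mean:.2f}")
```

The two updates pull in opposite directions, which is exactly the "adversarial" part: the discriminator's gradient sharpens its real/fake boundary, and the generator's gradient moves its samples toward whatever region the discriminator currently labels "real".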
Regardless, my great concern with neural networks is that in many cases the accuracy almost does not matter. They’re just a convenient way for the humans involved to say “sorry, the computer says you don’t deserve a job/a loan/health insurance” and avoid having to be responsible for their decisions.
A fun and non-controversial example here: Microsoft’s AI suddenly decides a particular program is a virus. Despite Microsoft themselves correctly concluding that it is not a virus and whitelisting it, they can’t/won’t actually change their AI to fix it and stop deciding that future versions are also a virus. “Sorry, that’s what the computer says.”