I’ve been re-watching Star Trek: Voyager recently, and I’ve heard that when it was filmed, the area outside the 4:3 frame wasn’t kept clear of filming equipment, so widening the picture isn’t as simple as just going back to the original film. With the advancement of AI, is it only a matter of time until older programs like this are re-released in more modern formats?

And if so, do you think AI could also upscale to 4K? So theoretically you could take an SD 4:3 program and make it 4K 16:9 (rough numbers below).

I’d imagine it would be easier for the early episodes of Futurama, for example, since it’s a cartoon and therefore less detailed.
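
For a rough sense of the numbers involved (a back-of-the-envelope sketch; I’m assuming a 640×480 square-pixel source, and real SD masters vary, e.g. 720×480 anamorphic):

```python
# Rough pixel math for SD 4:3 -> 4K 16:9 (assumed 640x480 square-pixel source).
SD_W, SD_H = 640, 480         # assumed SD 4:3 frame
UHD_W, UHD_H = 3840, 2160     # 4K UHD 16:9 frame

scale = UHD_H / SD_H               # 4.5x upscale to reach the target height
upscaled_w = int(SD_W * scale)     # 2880 px wide after upscaling
missing_w = UHD_W - upscaled_w     # 960 px that don't exist in the source at all
per_side = missing_w // 2          # 480 px to generate ("outpaint") on each side

print(f"upscale factor: {scale}x")                                 # 4.5x
print(f"after upscaling: {upscaled_w}x{UHD_H}")                    # 2880x2160
print(f"to invent: {missing_w} px total, {per_side} px per side")
```

So the AI would have to both sharpen a 4.5× enlargement and generate a 480-pixel-wide strip on each side from nothing.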

    • Nerd02
      link
      fedilink
      13
      8 months ago

      Holy cow that is beyond impressive. Sure enough, sometimes it does hallucinate a bit, but it’s already quite wild. Can’t help but wonder where we’ll be in the next 5-10 years.

      • Tar_Alcaran
        link
        fedilink
        12
        8 months ago

        Eh, doing this on cherrypicked stationary scenes and then cherrypicking the results isn’t that impressive. I’ll be REALLY impressed when AI can extrapolate someone walking into frame.

    • @nul@programming.dev
      link
      fedilink
      6
      edit-2
      8 months ago

      The video seems a bit misleading in this context. It looks fine for what it is, but I don’t think they have accomplished what OP is describing. They’ve cherrypicked some still shots, used AI to add to the top and bottom of individual frames, and then given the shot a slight zoom to create the illusion of motion (a rough sketch of that kind of pipeline is below).

      I don’t think the person who made the content was trying to be disingenuous; I’m just pointing out that we’re still a long way from convincingly filling in missing data like this for videos, where the AI has to understand things like camera moves and object permanence. Still cool, though.
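
      Something like this, roughly (a minimal sketch of what I mean, not the creator’s actual workflow — the outpainting model itself is stubbed out because I don’t know what they used):

      ```python
      # Pad each still, have a model fill the padded strips, then fake motion
      # with a slowly tightening center crop ("Ken Burns" zoom).
      from PIL import Image

      def pad_and_mask(still, extra_top=140, extra_bottom=140):
          """Extend the frame vertically; white mask = strips the model must invent."""
          w, h = still.size
          canvas = Image.new("RGB", (w, h + extra_top + extra_bottom), "black")
          mask = Image.new("L", canvas.size, 255)
          canvas.paste(still, (0, extra_top))
          mask.paste(0, (0, extra_top, w, extra_top + h))
          return canvas, mask

      def outpaint(canvas, mask):
          """Placeholder: a real pipeline would call an inpainting/outpainting
          model here to fill in the masked strips."""
          return canvas

      def fake_motion(frame, n_frames=48, max_zoom=1.05):
          """Create the illusion of movement by slowly zooming into one still."""
          w, h = frame.size
          for i in range(n_frames):
              zoom = 1 + (max_zoom - 1) * i / max(n_frames - 1, 1)
              cw, ch = int(w / zoom), int(h / zoom)
              left, top = (w - cw) // 2, (h - ch) // 2
              crop = frame.crop((left, top, left + cw, top + ch))
              yield crop.resize((w, h), Image.Resampling.LANCZOS)

      # usage (hypothetical file name):
      # still = Image.open("frame_0001.png")
      # frames = list(fake_motion(outpaint(*pad_and_mask(still))))
      ```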

      • @Crul@lemm.ee
        link
        fedilink
        4
        8 months ago

        Great points. I agree.

        A properly working implementation for the general case is still a long way off, and it would be much more complex than this experiment. Not only will it need the usual frame-to-frame temporal coherence, it will probably also need to take into account information from potentially any frame in the whole video in order to stay consistent across different camera angles of the same place.
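
        Very roughly, the structure might look something like this (just a hypothetical sketch; the model call is stubbed out):

        ```python
        def pick_reference_frames(frames, current_idx, stride=50):
            """Sample frames from across the whole video so the model can stay
            consistent with other shots of the same location."""
            return [frames[i] for i in range(0, len(frames), stride) if i != current_idx]

        def outpaint_with_context(frame, recent_outputs, reference_frames):
            """Hypothetical model call: fill in the missing edges of `frame`,
            conditioned on recently generated frames (temporal coherence) and on
            reference frames from anywhere in the video (scene/angle consistency)."""
            return frame  # placeholder

        def outpaint_video(frames):
            outputs = []
            for i, frame in enumerate(frames):
                refs = pick_reference_frames(frames, i)
                recent = outputs[-4:]  # last few generated frames, for smooth motion
                outputs.append(outpaint_with_context(frame, recent, refs))
            return outputs
        ```

        And even that glosses over hard parts like estimating the camera motion and keeping the invented edges locked to it.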

      • @Honytawk@lemmy.zip
        link
        fedilink
        1
        8 months ago

        It is the first iteration of this technology; things will only improve the more we use it.

        That it can do still images is already infinitely more impressive than not being able to do it at all.

        • zeus ⁧ ⁧ 𓆩⚡︎︎𓆪
          link
          fedilink
          2
          8 months ago

          that’s weird. it’s actually a pretty useful feature, but it’s odd they’d add it to old reddit before new reddit, considering it’s basically deprecated. maybe it’s just an a/b rollout and i don’t have it yet

          i have old.reddit as default as well, but i’m not logged in on my phone browser and it wouldn’t open in my app

          • @Crul@lemm.ee
            link
            fedilink
            2
            8 months ago

            > that’s weird. it’s actually a pretty useful feature, but it’s odd they’d add it to old reddit before new reddit, considering it’s basically deprecated. maybe it’s just an a/b rollout and i don’t have it yet

            Sorry, I think I didn’t explain myself correctly. That feature is a very old one; it has been on old reddit for as long as I can remember. It has also worked on new reddit at some point, see the screenshot below from a comment I posted 6 months ago:

            [Screenshot: "View discussions in X other communities" feature in new reddit]

            In old reddit it's accessible from the "other discussions" tab.

            • zeus ⁧ ⁧ 𓆩⚡︎︎𓆪
              link
              fedilink
              2
              8 months ago

              how the hell did i use reddit for almost a decade and not know about that feature

              it wasn’t your poor explanation, it was just me being an idiot i think - i just assumed it was new