Yet another generative tool without a safe and fair use discussion
Google has just revealed its latest text-to-video diffusion model, Google Lumiere, right as the debate over fake images and videos heats up, accompanied by the following note:
Societal Impact
"Our primary goal in this work is to enable novice users to generate visual content in a creative and flexible way. However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use."
This is the only paragraph in the paper on safe and fair use. The model's output certainly looks impressive, but without any concrete discussion of ideas and guardrails for safe and fair use, this reads as little more than a box ticked to head off bad publicity from the likely consequences.