Yet another generative tool without a safe and fair use discussion

Google appears to have just unveiled Lumiere, its latest text-to-video diffusion model, right as the debate over fake images and videos heats up. The paper includes the following note:

"Societal Impact
Our primary goal in this work is to enable novice users to generate visual content in a creative and flexible way. However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use."

This is the only paragraph in the paper on safe and fair use. The model output certainly looks impressive, but without a concrete discussion of ideas and guardrails for safe and fair use, this reads like nothing more than a checkbox meant to deflect bad publicity from the likely consequences.

Source