Large or “foundation” models are now widely used to generate not only text and images but also video, music and code from prompts. Although this “generative AI” revolution is clearly driving new opportunities for innovation and creativity, it is also enabling the easy and rapid dissemination of harmful speech and the potential infringement of existing laws. Much attention has recently been paid to how bespoke legislation might be drafted to control these risks and harms; however, private ordering by generative AI providers, via user contracts, licenses and privacy policies, has so far attracted less attention. Drawing on the extensive history of scholarship on the terms and conditions (T&C) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January–March 2023, in which T&C were mapped across a representative sample of generative AI services. Focusing on copyright and data protection, our early findings indicate the emergence of a “platformisation paradigm,” in which providers of generative AI attempt to position themselves as neutral intermediaries. We conclude that new laws targeting “big tech” must be carefully reconsidered to avoid repeating the past power imbalances between users and platforms.