1) Some background info on why I think this is promising (if you just want the guide you can skip this part):

I'm sure most of you around here have heard about OpenAI's Dall-E already. It's one of the latest developments in AI Vision, a program smart enough to generate any image from any prompt. Unfortunately, OpenAI (more like ClosedAI) censors Dall-E extremely hard, to the point where the only way to get NSFW content is to get super creative with your words to avoid all the banned words, and even then, you'll need to have 50 throwaway email accounts and phone numbers ready to make new ones every time you get banned (which you will).

But I'm sure most of you have not heard that OpenAI now has some serious competition from an actually open-source program: Stable Diffusion. The Stable Diffusion software has been publicly available from the beginning on their GitHub repo, and just 24 hours ago they publicly released their latest, most trained model yet. This means that anyone with a decent enough card can now run it on their PC, 100% free, no trial/credits bullshit. A GPU with 8GB VRAM is recommended, but I run it fine on an old 6GB VRAM GPU with a slightly modified release that sacrifices generation time for lower VRAM usage - it's included in the guide below.

Now you might be wondering, "Bro, I tried NSFW on Dall-E and got horror-movie bodies I wish I'd never looked at - how is this any different?" It's safe to assume that any startup on the forefront of AI Vision and text-to-image generation will do their best to remove all NSFW content from their training datasets, and even add NSFW filters to the software itself. So Stable Diffusion's NSFW generation capabilities would automatically suck too, right? Is it any different? Well, from the company's own description, their model was trained on a subset of LAION-5B (a huge database of 5 billion captioned images).