NotaDaedraWorshipper wrote: »
BloodyStigmata wrote: »
Those are going to haunt my nightmares.
Yup.
I don't think they're THAT bad - definitely not "guar" though!
phaneub17_ESO wrote: »
Wonder if it can do all the Daedric Princes in one family portrait.

I also somehow don't know. I think it's trying to score points, or it's too impressed by Elsweyr...
OK, I'll try something without Elder Scrolls:
houses with mushroom roofs/lake in foreground/hanging trees on the banks/sunrise, high details
new york:
berlin
dallas
Balmora city, morrowind, elderscrolls online
I think it likes spiky towers or houses.
Or you have to describe it differently, so not only the place but exactly what you want. Only that creates other problems again, and garbage comes out of it.
I am currently looking around a bit and trying out how the AI thinks.
But I think it loves spiky things:
town with half-timbered houses, medieval
I think the AI assumes that a city always contains something pointy, so one has to work more with "village" instead.
OK, "village" is the same, but not as much.
In a couple of years... I wonder if these AI could make Games on the go
So yeah... Gonna ruminate on this a bit. Gonna be one of those rather lengthy posts of mine. This happens to be a subject near to me you see...
Anyway...
In a couple of years... I wonder if these AI could make Games on the go
No. Not in the next couple of years. But... Eventually... Probably? Unless the world ends first or something.
A few weeks ago, I came across an interview with a group behind one of these bots, and they stated that their end goal was to create something like the holodeck from Star Trek, just with VR goggles instead of that magical force field room thing.
So they are kinda working on that sort of thing, but it will require several other technologies to have some rather significant breakthroughs. The thing is, though, that they do not have to work on these technologies in serial fashion. In fact, there are many teams of researchers around the world exploring all the required components to make that a reality.
So theoretically, I suppose, there could be a sudden wave of major breakthroughs in all the associated fields; add in a couple of years of working out how to integrate it all together, and we could have something like that in 5 to 6 years.
That timeline, though, is quite unlikely, simply due to the number of things we'd have to figure out first to make it work, and then make work together. And just think how much difficulty people are having making our current games work without bugs and such, and how game development takes half a decade or more at this point. I doubt we will have an AI-generated adventure machine anytime soon.
I do, however, fully believe we will; how and when and in what form, and how much human "steering" it will require, is anyone's guess. So maybe in 10 years? 15? 25? Who knows. In 25 years things will have progressed so far from where we are now that it is practically impossible to make any kind of prediction, simply because we have no idea what will be invented, or when.
We can, of course, extrapolate from what tech is currently available. There is little reason to doubt it will keep getting better, though it is always possible that we will hit some unforeseen bottleneck that slows things down. But assuming nothing like that happens, it is clear that in 25 years graphics will be immeasurably better than they are now. We will turn the paintings of Renaissance masters into vivid 3D worlds for us to explore and slay monsters in.
AI will be handling most of the tedious bits of asset generation and even level design. You just tell the AI what you want, it spits out something pretty close, and then you tweak it to your tastes and needs. That is simply an extension of what we have now, and it is the bare minimum we will have. Personally, I think we will have much more than that in 25 years.
Story engines can be coded to create more or less compelling narratives based on a string of inputs, and I expect more AI-based functions to become their own independent modules, kind of like Havok is the go-to for physics in games. In the not too distant future we might get a relationship modulator, an economy calculator, and weather system generators as independent modules that game devs just plug into the whole, set some in-world base values for, and let do their magic on their own.
It just depends on which fields the major breakthroughs take place in, and in what order and at what pace they occur.
If you are interested in taking a peek into the future, there is this YouTube channel I occasionally watch to keep tabs on what's new and such. It's called Two Minute Papers. I heartily recommend it to anyone interested in learning where the cutting edge is at the moment.
And if this whole AI art thing has taken you by surprise, then most of the things detailed on that channel will probably feel like magic to you...
Speaking of AI art...
Well, it's not making artists obsolete just yet, or even any time soon. But it will change the workflow and the market substantially.
Back in 2002, after having grown disillusioned with the prospects of academic life of a historian as a career path, I decided to move into the creative field and learned how to draw and paint.
At that time, digital art was still in its infancy, but it was immediately evident to me that it was going to be the way to go. That it was going to be the way of the future.
Digital painting techniques do not make you any better at art, or make your works any more impactful, or make it "easier." They do, however, make the whole process so much more efficient, and come with a lot less hassle (no need to wash your brushes ever again), no need for huge work spaces (heck, you could work in a coffee house, sipping mocha lattes on the other side of the globe from your employer, and be cranking out art just fine), and offer so many new and interesting possibilities that it was clear it was going to take over the graphic arts industry.
And it pretty much has done just that.
Now, in 2022, with the maturation of these art bots, I can see the same sort of vibe in the air. These things are going to revolutionize how we create and consume art and images, and in a few years from now, do the same to animation and 3D modeling.
The technology they provide is so much faster, easier, and more efficient, and enables so many new and exciting possibilities, that it is really rather impossible to ignore.
However, the thing with AI art is that it doesn't really produce exactly what you are looking for. And the options for steering it towards your intentions are still limited, though that is set to improve as we work out exactly how best to utilize this tech.
At this time, most of these bots just generate random, dreamlike images formed as a hodgepodge of all the accumulated crud floating about the internet. Some are just wrong, some are weird, and most just sort of hint at something actually being there, but it's just noise and random shapes. The image you think you are seeing is actually something your brain interprets from the chaos it sees; it isn't in the painting itself as such. This of course varies a lot by subject matter and style, and certain things, like everyday objects and simple ideas, fare better than expressive and interpretable abstract notions.
Some of the bots have mechanics for feeding in seed images and creating some sort of baseline from which you want the AI to iterate, but those options are limited and kind of annoying to use. UI design is not really a priority for the researchers behind these algorithms at this point.
You can sort of do it right now with famous images and widely recognized artists, and universally understood concepts. Like you can totally ask for a "Frank Frazetta style Mona Lisa riding a cyber bike on the moon" and expect to get something that will actually look like what you asked for.
Inputting stuff that is... well, less defined, leads in more random and odd directions. Things like "Dark, dreamy wood, full of peril and mystery" can give you cool images, or a weird blur of conflicting motifs.
A good example, from the images in this thread, is how these bots struggle with something like a Khajiit. We all know what a Khajiit looks like, and we can imagine them doing all sorts of things, but the AI has no clue as to what a Khajiit is. As smart as it is, it is not a sentient being; it does not understand the meaning of any of these inputs, much less the connotations and nuances such images contain. It just processes them on a mathematical basis and spits out stuff that equates as valid in its math, according to its evaluation of what we humans have posted online.
Once we get better options to "teach" the AI stuff (like "this is a Khajiit," plus 100 or so Khajiit images, and maybe seeder pics for things like "Khajiit Anatomy, Khajiit Facial Features, Khajiit Proportions, Khajiit Expressions, Khajiit Cultural Garb"), then the AI can start randomizing them into doing stuff like "Khajiit surfboarding on the astral waves off the shores of Sands Behind the Stars," and we can expect results that look like what we thought they would. Or maybe all that will be replaced by digital mannequins, like those wooden dolls used by traditional artists, except these can define make-believe creatures such as Khajiit. You just model your fantasy race 3D dolly, and then tell the AI to use that as the baseline instead of human anatomy for figures in the "painting." You could do the same with expressions and whatnot. Just looking at the improvements in Microsoft's latest iteration of Virtual Humans and their expression tracking makes me wonder about all the possibilities they'd enable if you could incorporate them into the AI art process.
This also explains why a lot of those Solitude images "failed" and are not really "Solitude" images. They are instead "solitude" images, as interpreted by the algorithm based on images tagged with solitude monikers. We know that Solitude is a city on the northern shore of Tamriel, characterized by a distinct landmark called the Great Arch. The AI doesn't, unless you specifically teach it that fact. It can't go off on its own and learn such things. Well, I guess it could, but it's highly unlikely, and it's a lot more plausible that it would "learn" something completely different from the various references to the "City of Solitude" available on the internet. Regardless of what some people seem to believe, AIs simply aren't yet conscious on such a level as to be able to do such nuanced thinking.
For now, and for the foreseeable future, we are stuck teaching these bots ourselves. If we want them to understand specific made-up or abstract notions, we need to teach those to them. And at this time, that is not really the focus of the various bots made available to the wider audience.
I give it two, maybe three years, and by then we should have that ability, as well as desktop home solutions for generating private images. We will have our own personal AI that we can teach to do images in a particular way. We can, in a sense, create our own AI with a distinct style and start churning out images of anything we can think of, in a cohesive style unique to our own bot.
Cultivating your own AI with a distinct style and flair might become part of being a visual artist in the future. And maybe such educated bots will stop making spiky cities. I mean, what else would you expect them to do right now?
Pretty much all modern cities are pretty darn spiky. So if you ask for a city, or even a village... well, most of the source images it is going to work through sport pretty spiky landscapes. Not that it understands any of it; it doesn't even understand what a building is, let alone what a city is. It just recognizes that the term is associated with a certain pool of resource data, and that most of those images have strong vertically aligned geometry in them. So that is what it churns out: a mathematically valid interpretation of the prompt based on the available source data.
So at least for now, artists will remain a thing, though the number of available gigs is going to shrink as AI tech matures.
AI can generate something good enough, close enough, almost like the real thing; it's far too easy and cheap to bother paying an artist to spend a week working on something you can generate in minutes with AI.
You will still need a human mind behind it to craft the prompts, to feed it seeder images, to define things that are abstract or made up in nature (like Khajiit), and to steer the whole process towards something usable, something with meaning behind it.
I've taught digital art and painting on and off for 10 years now, and my advice to anyone interested in it as a career is to go learn AI art now: learn to use it, learn to work with it, incorporate it into your workflow, and use it to your advantage. 'Cause if you won't, there are others who will, and you won't be able to compete with them on the sheer economic efficiency it provides.
Now, as a final note: about a week or so ago, I had a discussion about this, and noted that in a few years we will have AI art apps on our phones, and we will be sending our "creations" to our friends as part of everyday socializing, as normal as tweets are these days. AI art equalizes the creative process, and it will let people feel that creative buzz even without spending an inordinate amount of time learning how to draw and paint.
Those skills will, of course, retain their place and value, but their role will change as time passes. Movies didn't kill books, and AI art won't kill the classical arts, but it will provide people with an easy and exciting avenue to create things and explore their own imagination. As such, I see it all as a net positive.
I guess this thread is a bit of a precursor to that: people thinking of cool things, feeding them into the AI, tinkering and refining the results until the system spits out something that actually catches our attention and makes us go "whoa."
I bet there are also plenty of tabletop GMs churning out visual aids for their campaigns with the help of AI. If I were still actively running an RPG campaign, I know for sure I would.
Who knows where this will all lead, especially once it starts interacting with other emerging tech. How about posting your selfies with AI-generated augmented reality filters? Visiting famous places and turning each snapshot of your life into a Vincent van Gogh rendition of a steampunk world with orcs and elves? I can see that as totally plausible with AI art and a couple of other things looming on the horizon.
So, to end this, I think I'll just quote one of the most distinct catchphrases of Dr. Károly Zsolnai-Fehér, the man behind the Two Minute Papers channel: "What a time to be alive!"

Hey @everyone we're going to try doing a --test of a new image making system.
This test will last 24 to 48 hours (depending on user behavior and moderator feedback).
This test is an effort to unify aesthetics and coherence into a single system.
There are two modes:
1) A general purpose artistic mode you can use by typing --test
2) A photo-realism mode you can use by typing --testp
Both settings can be toggled via the /settings panel
We are still trying to figure out how creative the system should be.
- If you want it to be more creative type --creative after your prompt
- So far our guides/mods preferred less creativity for photos, and there was an even split for general-purpose mode
(personally, I love --creative, but it's a bit more chaotic for sure)
We've enabled --stylization to control the 'vibes' from a min of --stylize 1250 to a max of --stylize 5000 (default is 2500)
Known limitations:
- This test does not support multi-prompts or image-prompts
- The maximum aspect ratios supported are 3:2 and 2:3
- Each command makes two square images or one for non-square
- Words near the front of the prompt may matter more than words near the back
- The system may sometimes lock onto nouns more than adjectives
Please expect us to regularly change the model for the next few weeks. We're planning for more major changes.
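Putting the flags above together, a hypothetical test prompt might look like this (the prompt texts are borrowed from earlier in the thread; the exact flag combinations are just illustrations):

```
/imagine prompt: houses with mushroom roofs, lake in foreground, hanging trees on the banks, sunrise, high details --test --creative --stylize 3500
/imagine prompt: town with half-timbered houses, medieval --testp --stylize 1250
```

The first uses the general-purpose artistic mode with extra creativity and above-default stylization; the second uses the photo-realism mode with the minimum stylization.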
INTRODUCING THE REMASTER FEATURE
- We want to try letting people 'remaster' images from the v1/v2/v3 algorithm using the 'coherence' of the --test algorithm.
- Underneath all new upscaled jobs (not made with test/testp) you'll see a 'remaster' button
- You can remaster any previous upscale job you've made in the past with /show jobID (you can get the jobID for any by going to the gallery and clicking 'copy jobid' under the image options)
Limitations
- Only works on wide aspect ratios up to 2:3 or 3:2
- Multi-prompts might work a bit funny, and there's no image prompt support yet, sorry!
This is kind of a crazy and experimental feature, we don't know if we will keep it around as-is but from our testing so far we're finding it really fun.
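Based on the steps above, the remaster flow for an older upscale would go roughly like this (the jobID is a placeholder, not a real ID):

```
1. In the gallery, click 'copy jobid' under the image options
2. In Discord, run: /show <jobID>
3. Under the re-posted upscale, press the 'remaster' button
```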

How do you use that Midjourney AI app?
I saw some awesome Fallout images said to be created by that program.
jtm1018 wrote: »
How do you use that Midjourney AI app?
I saw some awesome Fallout images said to be created by that program.
@jtm1018 If you have Discord, you're halfway there; here is a Midjourney Quick Start Guide.
