This week on YouTube, I compared four of today’s leading generative AI image models: Midjourney, Magnific, ChatGPT, and Adobe’s Firefly.
The video runs identical prompts through each model to compare their unique strengths, limitations, and surprising outcomes when crafting compelling images from scratch.
It’s fascinating to see how each AI interprets prompts differently, sometimes leading to stunningly creative results—and other times, to images that missed the mark in surprising ways… spoiler alert… Adobe’s Firefly model (even with the new PS 2025 update).
Thanks for this AI video Blake.
Interesting. I’m definitely one who falls into the “keep your enemies closer” camp, although the upscaling in Magnific is quite impressive. I would not use its services enough, however, to justify their subscription prices. Every now and then I could use an upscaler. I have Topaz AI – it’s okay but definitely not the quality of Magnific. Maybe one day Adobe will improve Firefly – let’s hope!
Great video Blake! I’ve been using Midjourney for quite a while. I didn’t realize Magnific was now doing generation. It is a bit expensive for a hobbyist but I have to agree about the quality. Thanks for doing a great comparison and addressing the poor quality we see typically in Photoshop generation.
It is pricey. I pay for ChatGPT, Midjourney, and Magnific. It’s about $1300 a year for all three. However, I’m a solopreneur who would pay people a LOT more than that for even just a little help.
I would say it’s expensive for a hobbyist, but it depends on the hobby 😉 If compositing assets are something you need, there is nothing like it. But for the average person who spends more time making photos than compositing, it’s probably not worth the cost. I use it to upscale my educational materials so you have decent images to work on that don’t “feel like AI”.
I had a very quick try playing with Magnific.
Not very user friendly.
As you say, expensive.
I wanted to try to upscale a photo – no joy, even after signing in to the free trial!
I get good (for me) results with Bing.
All of them have their quirks. I can assure you, it’s better than Bing 🙂 But the interface is not click-and-go; you have to click on the question marks and gain an understanding of the models they use for generation and upscaling. After a year with Magnific I can tell you with 100% certainty, there is no other program out there like it and it is the best at what it does.
Great video – thanks! I ran your prompts through Grok and got results comparable to Midjourney for the most part. Given that they started years behind everyone else and have only been generating images for a few months, they are improving at a remarkable rate. It requires a subscription to X at $7 per month. I suggest you include Grok with the others in your next analysis.
Thanks! I appreciate it. I’m getting an influx of “include this next time…”
I use what I know, and will more than likely do the same next time, letting those who want to try their favorite generator use the prompts I provide. Then they can assess theirs against the ones I use as a basis for judging. I simply cannot do this for every generator; it’d be an all-day event for one photo 🤣
Thanks, Blake! Excellent demonstration comparing these models and their generated results from the same assignments.
Really informative.
Would love if you would do this again, in one year, as an anniversary, with the same assignment. Please, … ?
That sounds like a great idea!
In your hands, it will be great! I will be looking forward to seeing how it all develops.
Because I was so impressed with your demo, as an aside, … had an interesting conversation about AI models with an outsider (someone not in visual arts, photography, etc.). They wanted to know how ‘it’ learned, how ‘it’ understood our descriptions, and how it works so fast. I wondered if a lot of it comes from our use of ‘tags’ as descriptors. Not that it matters, really.
Thanks for the informative video Blake. As always, you nailed it! 🤣🤣🤣
hahahahah I see what you did there.
I have been using Midjourney as a hobby for about a year, experimenting with abstract concepts and getting very interesting results.
Like “Eternity” – just that one word is my prompt.
Well done, and as always, informative and educational!
Blake, thank you for this video! At 76, the only AI I have used is DALL-E 3, but I have now generated over 12,000 images for experimental purposes. These are all shown as albums on my Facebook page. DALL-E 3 has been improving and now has at least three formats: square, portrait, and landscape. I have been experimenting with 360 VR generation, and I complete the edges in Photoshop, upscaling it first in Topaz Photo AI. This is not as detailed as Magnific AI, so I should look into this. Also, I have enjoyed Immersity AI for 2D to 3D rendering. I do have a VR headset, so I am curious how both the 360 VR and the 3D look on the Meta Quest. Have you looked at any of my images with your headset? I have been told by viewers on the 3D AI Stereoscopic FB site that they looked good, but I do not know for sure. A lot to talk about here!