🤯 Apple just dropped the new ImageNet.
I'm not exaggerating. This is HUGE.
Apple just released Pico-Banana-400K: a 400,000-example text-guided image-editing dataset that changes everything for image-editing AI.
Everyone talked about reasoning. Apple just quietly launched the foundation for next-gen visual AI.
The wild part? It’s all real photos. No synthetic garbage.
Apple used Google's Nano-Banana model to generate the edits. Then Gemini 2.5 Pro scored every single image. Only the top-tier quality made it in.
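The pipeline described above (generate an edit, score it with a judge model, keep only the best) can be sketched in a few lines. The field names and the threshold here are my own illustration, not Apple's actual code:

```python
# Hypothetical sketch of the judge-then-filter step: a judge model
# assigns each candidate edit a quality score, and only edits above a
# cutoff survive into the dataset. "judge_score" and 0.8 are assumed
# names/values for illustration only.
def keep_top_quality(edits: list[dict], threshold: float = 0.8) -> list[dict]:
    """Keep only edits whose judge score clears the cutoff."""
    return [e for e in edits if e["judge_score"] >= threshold]

candidates = [
    {"id": "a", "judge_score": 0.95},  # kept
    {"id": "b", "judge_score": 0.40},  # dropped
]
print(keep_top_quality(candidates))  # only edit "a" survives
```

The real pipeline presumably batches judge calls and uses richer rubric scores, but the filtering logic reduces to a cutoff like this.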
I remember the struggle to get models to just swap a background.
This data is different. It includes:
✅ 72K multi-turn sequences. For complex editing chains.
✅ 56K preference pairs. For training better, aligned models.
It means AI won't just learn what to edit. It'll learn how to edit well. Intuitive, Photoshop-level results are coming. 🚀
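Here's a minimal sketch of how those preference pairs could feed a DPO-style trainer: each record becomes a (prompt, chosen, rejected) triple. The record schema below is hypothetical; check the repo's README for the real field names:

```python
import json

# Hypothetical record layout -- the actual Pico-Banana-400K schema may
# differ. All field names here are assumptions for illustration.
sample = json.dumps({
    "instruction": "replace the cloudy sky with a sunset",
    "source_image": "images/0001.jpg",
    "preferred_edit": "edits/0001_a.jpg",
    "rejected_edit": "edits/0001_b.jpg",
})

def to_dpo_triple(line: str) -> tuple[str, str, str]:
    """Turn one (hypothetical) preference record into a
    (prompt, chosen, rejected) triple for preference training."""
    rec = json.loads(line)
    return rec["instruction"], rec["preferred_edit"], rec["rejected_edit"]

prompt, chosen, rejected = to_dpo_triple(sample)
print(prompt, chosen, rejected)
```

In practice you'd stream the dataset's JSONL line by line and hand the triples to your preference-optimization loop of choice.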
The best part? It's openly available under Apple's research license: 🔗 github.com/apple/pico-banana-400k.
Apple just gave every lab the data foundation they need. Go download it and start building!
What’s the first complex editing model you’re going to train with this data? 👇