Enhancing Efficiency and Creativity in Interior Design Through Diffusion Models
DOI: https://doi.org/10.5281/zenodo.17222911

Keywords: Interior Design, Diffusion Models, Generative Design Technologies, MidJourney, Text-to-Image Generation

Abstract
Interior design often faces challenges related to time-consuming processes and limited creative flexibility. Recently, text-based artificial intelligence models have introduced new possibilities for faster, more diverse, and aesthetically rich design generation. This study investigates the performance of diffusion-based models, specifically MidJourney, ChatGPT DALL·E, and FLUX.1 Kontext, in interior design visualization and compares them with a traditional rendering tool. Using identical textual prompts, images were generated across all platforms and evaluated for aesthetic coherence, compositional quality, and alignment with the input descriptions. The findings highlight MidJourney as the most effective tool for transforming conceptual design ideas into visually compelling outputs. It demonstrated clear strengths in capturing stylistic nuance, atmospheric consistency, and visual appeal, making it especially valuable in early-stage design workflows such as ideation and client presentations. The results indicate that text-to-image generation technologies like MidJourney can serve as powerful tools that enhance creativity and streamline communication in interior design. This study contributes to the growing body of research on AI-assisted design by demonstrating how generative models can support innovation and efficiency in visual representation practices.
Copyright (c) 2025 Ejons International Journal on Mathematic, Engineering and Natural Sciences

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.