Abstract
Using vision-language models (VLMs) in web development presents a promising strategy to increase efficiency and unblock no-code solutions: by providing a screenshot or a sketch of a UI, a VLM could generate the code to reproduce it, for instance in a language like HTML. Despite the advancements in VLMs for various tasks, the specific challenge of converting a screenshot into corresponding HTML code has been minimally explored. We posit that this is mainly due to the absence of a suitable, high-quality dataset. This work introduces WebSight, a synthetic dataset consisting of 2 million pairs of HTML code and corresponding screenshots. We fine-tune a foundational VLM on our dataset and show its proficiency in converting webpage screenshots to functional HTML code. To accelerate research in this area, we open-source WebSight.
URL
https://arxiv.org/abs/2403.09029