Jinyuan Sun is a Quant Researcher at DBS PTE LTD. He holds a Master's degree in Computer Engineering from the National University of Singapore (NUS) Certificate». He graduated from Beijing University of Posts and Telecommunications (BUPT) Certificate» and Queen Mary University of London (QMUL) Certificate» in June 2024 with a Bachelor of Science (Engineering) degree with First Class Honours, majoring in Internet of Things Engineering Transcript». He also completed a minor in Artificial Intelligence at the Yepeida Innovation Institute of BUPT. With an outstanding GPA of 3.68/4.0, Jinyuan has actively engaged in cutting-edge research projects, applying AI, Machine Learning (ML), and Computer Vision techniques to address real-world challenges.
He was awarded the National Scholarship of China (the highest undergraduate honor) in 2022, the Ministry of Education of China-Huawei Future Star Award in 2022, and the Second Scholarship of BUPT in 2021 and 2022. Actively participating in contests, he achieved notable recognition, including the National Bronze Award in the 8th China College Students' 'Internet Plus' Innovation and Entrepreneurship Competition (2022), Second Prize in the Beijing Division of the National College Students Mechanical Innovation Design Competition (2022), Successful Participant in the Mathematical Contest in Modeling (2022), and First Prize in the Beijing Division of the Innovation Creativity Entrepreneurship Competition (2023).
His primary research interests include quant strategies, trading systems, LLMs with applications in finance, Machine Learning, and Deep Learning. CV (PDF)»
Research Projects
Abstract: By applying diffusion models in the latent space of powerful pre-trained autoencoders, stable diffusion models have achieved great success in text-to-image generation, producing high-quality images consistent with input texts. However, a pre-trained stable diffusion model can hardly be fine-tuned on a dataset in a specific domain (e.g., watercolor or Chinese painting), because textual descriptions of the corresponding images are hard to obtain. In this paper, we propose a retrieval-based method to fine-tune the pre-trained stable diffusion model on a domain-specific dataset that contains only images, without textual descriptions. Specifically, we adopt a large-scale pre-trained vision-language model (e.g., CLIP) to retrieve, for each input image in the training phase, the reference image from the training set that is most similar to it (excluding itself) in the semantic space of CLIP. We then take the reference image as the condition for training, instead of a textual description. In the test phase, we retrieve the top-1 candidate for a given text and adopt the retrieved image as the condition to generate the output image. To better assess text-to-image generation models in a specific domain, we release a high-quality dataset consisting of 2,986 Chinese painting images. Extensive results on our proposed dataset demonstrate the superior performance of our proposed method.
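The reference-retrieval step described above can be sketched as follows. This is a minimal illustration, assuming image embeddings have already been extracted with a CLIP encoder; the function name and the NumPy implementation are illustrative, not the paper's actual code.

```python
import numpy as np

def retrieve_reference_indices(embeddings: np.ndarray) -> np.ndarray:
    """For each image, return the index of the most similar *other*
    image in the training set, by cosine similarity of CLIP embeddings."""
    # Normalize rows so dot products equal cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    # Exclude self-matches before taking the argmax.
    np.fill_diagonal(sim, -np.inf)
    return sim.argmax(axis=1)
```

During training, the image at the returned index would replace the text prompt as the conditioning input; at test time, the same cosine-similarity ranking is run between a text embedding and the candidate image embeddings to pick the top-1 condition.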
Abstract: We present a comprehensive study on a new task named image color aesthetics assessment (ICAA), which aims to assess color aesthetics based on human perception. ICAA is important for various applications such as imaging measurement and image analysis. However, due to highly diverse aesthetic preferences and numerous color combinations, ICAA presents more challenges than conventional image quality assessment tasks. To advance ICAA research: (1) we propose a baseline model called the Delegate Transformer, which not only deploys deformable transformers to adaptively allocate interest points, but also learns human color space segmentation behavior via a dedicated module; (2) we elaborately build a color-oriented dataset, ICAA17K, containing 17K images covering 30 popular color combinations, 80 devices, and 50 scenes, with each image densely annotated by more than 1,500 people; (3) we develop a large-scale benchmark of 15 methods, the most comprehensive one thus far, based on two datasets, SPAQ and ICAA17K. Our work not only achieves state-of-the-art performance, but more importantly offers the community a roadmap for exploring solutions to ICAA. Code and dataset are available in the supplementary material.
Abstract: Humans possess a remarkable capacity for continuous learning and adaptation throughout their lifetimes. This ability is often referred to as "never-ending learning," also known as continual learning or lifelong learning. Never-ending learning entails the ongoing development of increasingly complex behaviors and the acquisition of intricate skills to complement those already acquired. It involves the capacity to reapply, adapt, and generalize these abilities to novel situations. In this survey, I delve into the fundamental concept of Continual Reinforcement Learning (CRL) and offer a concise introduction to the world model and the mechanisms employed to bolster an agent's lifelong learning journey.
Individual Projects
Group Projects