RM PRESNTATION amc engineering college.pptx

VikasTiwari846234 18 views 15 slides Jul 21, 2024

About This Presentation

It's on research methodology.


Slide Content

ELECTRONICS AND COMMUNICATION ENGINEERING
Chat2VIS: Generating Data Visualizations via Natural Language Using ChatGPT, Codex and GPT-3 Large Language Models (MODIFIED)
Members (NAME, USN):
VIKAS KUMAR TIWARI (1AM21EC097)
VISHWANATH REDDY B (1AM21EC100)
SYED YOUSUF MASOOD (1AM21EC090)
SHASWAT SRIVASTAV (1AM21EC076)
SYED SAHIL ASIF (1AM21EC089)

ABSTRACT The paper proposes using pre-trained large language models (LLMs) such as ChatGPT and GPT-3 to convert natural language into code for visualizations, introducing Chat2VIS. It emphasizes effective prompt engineering for better language understanding and simpler solutions. Chat2VIS demonstrates robust visualization generation even from ambiguous queries, reducing NLI system development costs while improving inference compared to conventional NLP approaches. The study also addresses data privacy and security concerns and compares the performance of GPT-3, Codex, and ChatGPT across several case studies.

Problem statement The document introduces a system called Chat2VIS that aims to generate data visualizations from natural language queries. Existing approaches for creating visualizations from language are complex and costly. Chat2VIS leverages advanced large language models (LLMs) such as ChatGPT, GPT-3, and Codex to simplify this process. The paper evaluates Chat2VIS's performance and discusses its potential impact on making data visualization more accessible.

MOTIVATION Making Visualizations Easier: The authors want to find a way to create data visualizations (like charts and graphs) from everyday language. Imagine asking a computer to make a chart just by describing it in words! Existing Challenges: Current methods for doing this are complicated and expensive. They involve complex rules or custom-made computer programs. New Approach: The authors propose a fresh idea called Chat2VIS. It uses advanced language models (like GPT-3) to understand our language and turn it into visualizations. Testing and Comparison: They test Chat2VIS and compare it with other methods to see how well it works. Looking Ahead: The authors also talk about what could be improved and where this kind of technology might be useful in the future. Imagine a world where anyone can easily create charts just by talking to their computer!

INTRODUCTION Generating visualizations from natural language is a sought-after goal. Natural language interfaces (NLIs) are emerging as a powerful tool for achieving this. NLIs allow users to create visualizations using natural language queries, eliminating the need for programming. This makes data exploration more accessible and intuitive. The ultimate goal is to enable users to ask questions like "show me the sales trend" and get visualizations in response.

Today, users must translate their thoughts into tool-specific operations; NLIs aim to make visualization tools more accessible. The slide outlines the challenges and potential of NLIs for data visualization, how NL2VIS complexities are addressed by LLMs such as GPT-3 and ChatGPT, the proposed end-to-end NL2VIS system built on LLMs, an evaluation of different LLM models and their advantages (accuracy, efficiency, privacy), and a publicly available online application for NL2VIS experimentation.
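The end-to-end pipeline described above (column schema plus natural-language query in, plotting code out) can be sketched with a minimal prompt builder. This is an illustrative sketch only: the function name `build_prompt` and the prompt wording are assumptions, not the paper's exact template. The key idea it demonstrates is that only the column schema and the query are sent to the LLM, which is asked to complete a plotting script.

```python
# Illustrative sketch of a Chat2VIS-style NL2VIS prompt (assumed wording,
# not the paper's exact template). Only the schema and the user's query
# are included, not the row data, which helps preserve data privacy.

def build_prompt(df_name, columns, query):
    """Assemble an NL2VIS prompt from a column schema and an NL query."""
    schema = ", ".join(f"'{name}' ({dtype})" for name, dtype in columns)
    return (
        f"Use a dataframe called {df_name} with columns {schema}.\n"
        f"{query}\n"
        "import matplotlib.pyplot as plt\n"  # code stub for the LLM to complete
    )

prompt = build_prompt(
    "df",
    [("month", "str"), ("sales", "float")],
    "Show me the sales trend.",
)
print(prompt)
```

In the full system, this prompt would be sent to GPT-3, Codex, or ChatGPT, and the returned completion appended to the code stub and executed to render the chart.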

Literature review Provides an overview of natural language to visualization techniques, covering symbolic NLP and deep learning methods, and identifies research gaps. Focuses on deep learning approaches for NL2VIS tasks, discussing strengths, weaknesses, and recent advancements in transformer architectures. Offers insights into the evolution of NL2VIS techniques, from symbolic NLP to deep learning models, and discusses the potential of pre-trained language models. Analyzes various deep learning architectures for NL2VIS, including their applications and recent advancements in pre-trained language models. Conducts a comparative study between symbolic and deep learning approaches for NL2VIS tasks, examining performance and discussing future research directions.

METHODOLOGY Literature Selection: Identified relevant literature using keywords related to natural language processing, visualization, NL2VIS, symbolic NLP, and deep learning from academic databases. Inclusion Criteria: Selected peer-reviewed papers focusing on NL2VIS techniques, published within the last five years, and excluding non-English publications. Exclusion Criteria: Excluded papers lacking empirical data or rigorous methodologies. Data Extraction: Extracted methodologies, model types (symbolic NLP, deep learning), datasets, evaluation metrics, and key findings from selected papers. Analysis: Identified how symbolic NLP methods (rule-based, grammar-based) and deep learning approaches (including transformers) are utilized for NL2VIS tasks. Explored differences in flexibility, complexity, and performance between symbolic NLP and deep learning methods.

Comparison: Compared how symbolic NLP relies on predefined rules and grammatical structures, while deep learning models learn directly from data. Synthesis: Integrated findings to provide an overview of NL2VIS methodologies, highlighting trends, advancements, and research gaps. Discussion: Discussed implications of methodologies on NL2VIS effectiveness, challenges, and future research opportunities. Limitations: Acknowledged potential biases from selection criteria and reliance on published literature.

Conclusion This paper presents a novel end-to-end solution, Chat2VIS, for converting free-form natural language into visualizations using state-of-the-art Large Language Models (LLMs) like ChatGPT, GPT-3, and Codex. It demonstrates that pre-trained LLMs, coupled with well-engineered prompts, offer an efficient, reliable, and accurate approach to NL2VIS. The system automatically selects chart types and can understand vague or malformed user queries, while also preserving data privacy and security. The findings highlight the potential of LLMs to enhance existing NLIs for visualization, providing simpler and more robust solutions without the need for defining grammars or custom domain-specific language models. This study contributes valuable insights for researchers and practitioners in data visualization and NLIs, offering simpler and more accessible ways to convey data and insights to a wider audience.
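One practical detail of executing LLM-generated scripts: chat models often wrap their output in markdown code fences, so a small post-processing step is needed before the code can run. The helper below is an assumed illustration of that step, not part of the paper's published pipeline.

```python
# Assumed post-processing helper: strip a surrounding ```python markdown
# fence from an LLM response so the remaining script can be executed.

def extract_code(response: str) -> str:
    """Return the code body, removing a surrounding ``` fence if present."""
    text = response.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        if lines[-1].strip() == "```":
            lines = lines[1:-1]  # drop opening fence (with language tag) and closing fence
        else:
            lines = lines[1:]    # unterminated fence: drop only the opening line
        text = "\n".join(lines)
    return text

raw = "```python\nplt.plot(df['month'], df['sales'])\n```"
print(extract_code(raw))  # plt.plot(df['month'], df['sales'])
```

Responses without fences pass through unchanged, so the helper is safe to apply to every completion before execution.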

REFERENCES
[1] A. Narechania, A. Srinivasan, and J. Stasko, "NL4DV: A toolkit for generating analytic specifications for data visualization from natural language queries," IEEE Trans. Vis. Comput. Graphics, vol. 27, no. 2, pp. 369–379, Feb. 2021.
[2] L. Shen, E. Shen, Y. Luo, X. Yang, X. Hu, X. Zhang, Z. Tai, and J. Wang, "Towards natural language interfaces for data visualization: A survey," 2021, arXiv:2109.03506.
[3] Y. Wang, Z. Hou, L. Shen, T. Wu, J. Wang, H. Huang, H. Zhang, and D. Zhang, "Towards natural language-based visualization authoring," IEEE Trans. Vis. Comput. Graphics, vol. 29, no. 1, pp. 1222–1232, Jan. 2022.
[4] Y. Luo, N. Tang, G. Li, J. Tang, C. Chai, and X. Qin, "Natural language to visualization by neural machine translation," IEEE Trans. Vis. Comput. Graphics, vol. 28, no. 1, pp. 217–226, Jan. 2022.
[5] Y. Song, X. Zhao, R. C.-W. Wong, and D. Jiang, "RGVisNet: A hybrid retrieval-generation neural framework towards automatic data visualization generation," in Proc. 28th ACM SIGKDD Conf. Knowl. Discovery Data Mining, Aug. 2022, pp. 1646–1655.
[6] Q. Wang, Z. Chen, Y. Wang, and H. Qu, "A survey on ML4VIS: Applying machine learning advances to data visualization," IEEE Trans. Vis. Comput. Graphics, vol. 28, no. 12, pp. 5134–5153, Dec. 2022.
[7] T. B. Brown et al., "Language models are few-shot learners," 2020, arXiv:2005.14165.
[8] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 6000–6010.
[9] H. Voigt, M. Meuschke, K. Lawonn, and S. Zarrieß, "Challenges in designing natural language interfaces for complex visual models," in Proc. 1st Workshop Bridging Hum.–Comput. Interact. Natural Lang. Process., 2021, pp. 66–73.
[10] Y. Luo, N. Tang, G. Li, C. Chai, W. Li, and X. Qin, "Synthesizing natural language to visualization (NL2VIS) benchmarks from NL2SQL benchmarks," in Proc. Int. Conf. Manage. Data, China, Jun. 2021, pp. 1235–1247.

[11] J. Tang, Y. Luo, M. Ouzzani, G. Li, and H. Chen, "Sevi: Speech-to-visualization through neural machine translation," in Proc. Int. Conf. Manage. Data, Jun. 2022, pp. 2353–2356.
[12] YoloPandas Developers. (2023). YoloPandas. Python Package Index (PyPI). [Online]. Available: https://pypi.org/project/yolopandas/
[13] G. Liu, X. Li, J. Wang, M. Sun, and P. Li, "Extracting knowledge from web text with Monte Carlo tree search," in Proc. Web Conf., Apr. 2020, pp. 2585–2591.
[14] Y. Sun, J. Leigh, A. Johnson, and S. Lee, "Articulate: A semi-automated model for translating natural language queries into meaningful visualizations," in Proc. 10th Int. Symp. Smart Graph., Banff, AB, Canada: Springer, Jun. 2010, pp. 184–195.
[15] T. Gao, M. Dontcheva, E. Adar, Z. Liu, and K. G. Karahalios, "DataTone: Managing ambiguity in natural language interfaces for data visualization," in Proc. 28th Annu. ACM Symp. User Interface Softw. Technol., Nov. 2015, pp. 489–500.
[16] V. Setlur, S. E. Battersby, M. Tory, R. Gossweiler, and A. X. Chang, "Eviza: A natural language interface for visual analysis," in Proc. 29th Annu. Symp. User Interface Softw. Technol., Oct. 2016, pp. 365–377.
[17] X. Qin, Y. Luo, N. Tang, and G. Li, "DeepEye: Visualizing your data by keyword search," in Proc. EDBT, 2018, pp. 441–444.
[18] B. Yu and C. T. Silva, "FlowSense: A natural language interface for visual data exploration within a dataflow system," IEEE Trans. Vis. Comput. Graphics, vol. 26, no. 1, pp. 1–11, Jan. 2020.
[19] E. Loper and S. Bird, "NLTK: The natural language toolkit," 2002, arXiv:cs/0205028.
[20] C. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. Bethard, and D. McClosky, "The Stanford CoreNLP natural language processing toolkit," in Proc. 52nd Annu. Meeting Assoc. Comput. Linguistics, Syst. Demonstrations, 2014, pp. 55–60.
[21] C. Liu, Y. Han, R. Jiang, and X. Yuan, "ADVISor: Automatic visualization answer for natural-language question on tabular data," in Proc. IEEE 14th Pacific Vis. Symp. (PacificVis), Apr. 2021, pp. 11–20.
[22] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proc. Conf. North Amer. Chapter Assoc. Comput. Linguistics, Hum. Lang. Technol., vol. 1, 2018, pp. 4171–4186.
[23] Y. Luo, J. Tang, and G. Li, "NvBench: A large-scale synthesized dataset for cross-domain natural language to visualization task," 2021, arXiv:2112.12926.

THANK YOU