Feature | Description
---|---
Built on language models | LangChain is a framework designed specifically for developing applications based on language models such as GPT-4.
Model input/output | Supports flexible handling of model inputs and outputs, adapting to a wide range of application needs.
Data awareness | Connects language models to external data sources such as personal files, real-time internet data, and information from sources like Wikipedia and Google.
Agency | Lets language models interact with their runtime environment (e.g., web search, email), enabling more complex and dynamic responses.
Ability to act | LangChain agents can decide how to carry out operations, not only providing information but also performing complex tasks such as running Python code.
Seamless integration | Lets developers easily connect advanced language models (e.g., GPT-4) with their preferred data sources and environments.
Versatile AI applications | Combining these capabilities, developers can build a wide range of complex, multi-functional AI applications.
LangChain compiles and organizes the data in the PDF. Although we've been talking about PDFs, the data source can be anything: text files, Microsoft Word documents, YouTube transcripts, even websites. LangChain organizes this data and splits it into manageable chunks. Once split, the chunks are stored in a vector store.
Step | Description
---|---
Text vectorization | Input text (questions, article passages, etc.) is converted into numeric vectors, typically via machine-learning techniques such as word or sentence embeddings.
Vector storage | The resulting vectors are stored in a dedicated vector database or store for fast retrieval and comparison later.
Query processing | When a user issues a query or search, that text is also converted into a numeric vector, usually with the same embedding technique.
Similarity search | The query vector is compared against the stored vectors, commonly using a measure such as cosine similarity.
Retrieval | The database identifies the vectors most similar to the query and returns the corresponding original text, providing the most relevant information or answer.
LLM completion | In some applications, once the most similar texts are found, a language model (LLM) generates a fuller, more explanatory response from them.
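To make the similarity-search step concrete, here's a minimal sketch of cosine similarity over a toy in-memory "vector store" using NumPy. The documents and their vectors are invented for illustration; a real pipeline would use learned embeddings such as the OpenAI embeddings shown later.

```python
import numpy as np

# Toy "vector store": each document is paired with a made-up embedding.
docs = ["intro to investing", "baking sourdough bread", "stock market basics"]
vectors = np.array([
    [0.9, 0.1, 0.3],
    [0.1, 0.8, 0.2],
    [0.8, 0.2, 0.4],
])

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the two vectors, normalized.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

query = np.array([0.85, 0.15, 0.35])  # pretend this is the embedded user query
scores = [cosine_similarity(query, v) for v in vectors]
best = int(np.argmax(scores))
print(docs[best])  # the stored text most relevant to the query
```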
Application | Core function | Key features
---|---|---
YouTube script generator | Uses a language model such as GPT-4 to generate YouTube video scripts. | - User-defined keywords or topics <br/> - Structured scripts (intro, body, conclusion) <br/> - Video elements such as timestamps and captions
Web research tool | Processes and summarizes large volumes of online text. | - Handles news articles, research papers, and books <br/> - Quickly provides an overview of a topic
Text summarizer | Serves as an advanced text-summarization tool. | - Gathers information from multiple sources <br/> - Produces a distilled summary
Q&A system | Builds a question-answering system on top of uploaded documents. | - Allows uploading documents, PDFs, or books <br/> - Builds a knowledge base <br/> - Users ask questions and get answers from the knowledge base
`app.py`

```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')
```
```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')

# New: create the LLM and generate a response once a topic is entered
llm = OpenAI(temperature=0.9)
if topic:
    response = llm(topic)
    st.write(response)
```
To avoid repeatedly typing "Give me a Medium article on ...", we'll handle it with a prompt template.
```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI
# Import the prompt template from LangChain
from langchain.prompts import PromptTemplate

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')

# Create a PromptTemplate instance.
# It declares 'topic' and 'language' as inputs in the input_variables array.
title_template = PromptTemplate(
    input_variables=['topic', 'language'],
    template='Give me medium article title on {topic} in {language}'
)

llm = OpenAI(temperature=0.9)
if topic:
    response = llm(title_template.format(topic=topic, language='english'))
    st.write(response)
```
So far we've used LLMs for simple tasks: fill a prompt template, feed it to the language model, and get a response. For more complex tasks, however, we'll need chains, starting with the most basic: the simple chain.
```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')
language = st.text_input('Input language')

title_template = PromptTemplate(
    input_variables=['topic', 'language'],
    template='Give me medium article title on {topic} in {language}'
)

llm = OpenAI(temperature=0.9)
# verbose=True prints the chain's intermediate steps
title_chain = LLMChain(llm=llm, prompt=title_template, verbose=True)

if topic:
    # Our title template needs two input variables, so we pass a dict.
    response = title_chain.run({'topic': topic, 'language': language})
    st.write(response)
```
Output:

```
> Entering new LLMChain chain...
Prompt after formatting:
Give me medium article title on investing in english

> Finished chain.
```
We can simplify the implementation a bit: a single input variable, simpler parameters, and a simpler call.
```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')

title_template = PromptTemplate(
    input_variables=['topic'],
    template='Give me medium article title on {topic}'
)

llm = OpenAI(temperature=0.9)
# verbose=True prints the chain's intermediate steps
title_chain = LLMChain(llm=llm, prompt=title_template, verbose=True)

if topic:
    # With a single input variable, a plain string is enough.
    response = title_chain.run(topic)
    st.write(response)
```
For more complex tasks, we can leverage sequential chains.
```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')

title_template = PromptTemplate(
    input_variables=['topic'],
    template='Give me medium article title on {topic}'
)
article_template = PromptTemplate(
    input_variables=['title'],
    template='Give me medium article for {title}'
)

llm = OpenAI(temperature=0.9)
title_chain = LLMChain(llm=llm, prompt=title_template, verbose=True)
article_chain = LLMChain(llm=llm, prompt=article_template, verbose=True)

if topic:
    # First, generate a title from the topic.
    title_response = title_chain.run({'topic': topic})
    generated_title = title_response if isinstance(title_response, str) else title_response.get('content', '')
    if generated_title:
        # Then, generate the article body from the generated title.
        article_response = article_chain.run({'title': generated_title})
        st.write(article_response if isinstance(article_response, str) else article_response.get('content', ''))
```
A simple sequential chain represents a series of chains. Each chain has exactly one input and one output, and the output of one chain becomes the input of the next.

Our language model so far has used the GPT-3 model, but there are newer models such as GPT-3.5 Turbo and GPT-4. For those, we'll use the ChatOpenAI constructor.
```python
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

os.environ["OPENAI_API_KEY"] = apikey

st.title('Medium Article Generator')
topic = st.text_input('Input your topic of interest')

title_template = PromptTemplate(
    input_variables=['topic'],
    template='Give me medium article title on {topic}'
)
article_template = PromptTemplate(
    input_variables=['title'],
    template='Give me medium article for {title}'
)

llm = OpenAI(temperature=0.9)
title_chain = LLMChain(llm=llm, prompt=title_template, verbose=True)

llm2 = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.9)
article_chain = LLMChain(llm=llm2, prompt=article_template, verbose=True)

# Each chain's output feeds the next chain's input.
overall_chain = SimpleSequentialChain(chains=[title_chain, article_chain], verbose=True)

if topic:
    response = overall_chain.run(topic)
    st.write(response)
```
Running it produces the following command-line output:
```
> Entering new LLMChain chain...
Prompt after formatting:
Give me medium article for
"5 Strategies for Investing Wisely in Today's Markets"

> Finished chain.

I'm sorry, but I cannot provide you with a specific Medium article as I am an AI language model and do not have direct access to the internet or a database of articles. However, I can offer you a summary of five strategies for investing wisely in today's markets:

1. Diversify your portfolio: Diversification involves spreading your investments across different asset classes, sectors, and regions. By diversifying, you reduce the risk of one particular investment negatively impacting your overall portfolio. This strategy allows you to potentially earn profits while minimizing potential losses.

2. Conduct thorough research: Knowledge is key when it comes to investing wisely. Before investing, take the time to do thorough research on the companies or assets you plan to invest in. Analyze their financials, industry trends, and competitive advantages. Additionally, stay updated on market news and trends to make informed decisions.

3. Set clear investment goals: Define your investment goals and time horizon. Are you investing for short-term gains or long-term wealth accumulation? By setting clear goals, you can tailor your investment strategy accordingly and avoid making impulsive decisions based on short-term market fluctuations.

4. Have a long-term perspective: Investing wisely often involves a long-term perspective. Trying to time the market and make short-term gains can be speculative and risky. Instead, focus on investing in solid companies or assets that have long-term growth potential. Warren Buffett famously said, "The stock market is a device for transferring money from the impatient to the patient."

5. Control your emotions: Investing in volatile markets can be nerve-wracking, leading to emotional decision making. It's crucial to control your emotions and avoid making impulsive investment choices based on fear or greed. Develop a disciplined approach, stick to your investment plan, and avoid being swayed by short-term market sentiment.

Remember, investing always carries some level of risk, and it's advisable to consult with a financial advisor or conduct further research before making any investment decisions.

> Finished chain.
```
Language models are undeniably powerful. Yet they sometimes struggle with tasks that basic applications handle easily: logic, mathematical calculation, and communicating with external components. For example, if you ask ChatGPT for the latest articles about LangChain agents, it will fail, because ChatGPT's training only extends to September 2021.

We'll build a Wikipedia research tool to demonstrate agents and what they can do.

An agent needs access to specific tools, such as Google or Wikipedia search. By combining a GPT model with these tools, the agent can decide which tool to use and act on the result.
- LangChain agent tools: https://python.langchain.com/docs/integrations/tools/
- Agent types: https://python.langchain.com/docs/modules/agents/agent_types/
We'll equip the agent with two tools: Wikipedia and the llm-math tool, which enables basic math operations. We must also pass our language model to `load_tools`.
```python
import os
from apikey import apikey
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

os.environ["OPENAI_API_KEY"] = apikey

# Temperature 0: we want an objective research tool with no hallucinations.
llm = OpenAI(temperature=0.0)
tools = load_tools(['wikipedia', 'llm-math'], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

prompt = input('Input Wikipedia Research Task\n')
agent.run(prompt)
```
Now let's apply what we've learned about LangChain and large language models to build a question-answering application for our own documents. Users can upload various file types, including PDFs, Microsoft Word documents, and text files. The app connects to an OpenAI model, and once the document is uploaded you can start asking questions.
Install the dependency:

```
pip install chromadb
```
ChromaDB is an open-source vector database. Vector databases let applications work with vector embeddings, which convert various formats (text, images, video, audio) into numeric representations, allowing AI to understand and assign meaning to them. These numeric representations are called vectors, and vector databases excel at storing and querying such unstructured data, especially during semantic search.
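Before wiring ChromaDB into LangChain, here's a minimal standalone sketch of the core idea. The collection name and documents are made up, and it assumes chromadb's default embedding function is available.

```python
import chromadb

client = chromadb.Client()  # in-memory client; nothing is persisted
collection = client.create_collection('demo_docs')  # hypothetical collection name

# add() embeds the documents using the collection's default embedding function
collection.add(
    documents=['The senate ratifies treaties.', 'Bread needs yeast to rise.'],
    ids=['doc1', 'doc2'],
)

# query() embeds the query text and returns the most similar documents
results = collection.query(query_texts=['Who approves treaties?'], n_results=1)
print(results['documents'])
```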
```python
import os
from apikey import apikey
import streamlit as st  # for our UI front end
from langchain.chat_models import ChatOpenAI  # for the GPT-3.5/4 models
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

os.environ["OPENAI_API_KEY"] = apikey

st.title('Chat with Document')  # our page title

loader = TextLoader('./constitution.txt')  # load the text document
documents = loader.load()
print(documents)  # print to confirm the document loaded correctly
```
Next, we need to split the document into chunks, because the text is too long to load into the model in one piece. We use RecursiveCharacterTextSplitter to divide the text into smaller, semantically related chunks (the sentences within each chunk are related in meaning). Add the following code:
```python
os.environ["OPENAI_API_KEY"] = apikey

st.title('Chat with Document')  # our page title

loader = TextLoader('./constitution.txt')
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(documents)
st.write(chunks[0])
st.write(chunks[1])
```
We use the defaults of 1000 for chunk size and 200 for chunk overlap. Chunks that are too small or too large lead to inaccurate search results or missed opportunities to surface relevant content.
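To see how the overlap behaves, here's a quick sketch on a short string, with sizes scaled down so the effect is visible (the numbers are chosen purely for illustration):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = ('We the People of the United States, in Order to form a more perfect Union, '
        'establish Justice, insure domestic Tranquility, provide for the common defence.')

# Tiny sizes just to make the overlap visible.
splitter = RecursiveCharacterTextSplitter(chunk_size=60, chunk_overlap=20)
for chunk in splitter.split_text(text):
    print(repr(chunk))  # consecutive chunks share up to ~20 characters
```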
Once our chunks are ready, we'll create embeddings with OpenAI's embedding models, which were trained on a vast internet text corpus.

Embeddings measure the relatedness of text strings and are commonly used for search and clustering. Each embedding is a vector of floating-point numbers, and the distance between two vectors measures how related they are.

The idea is to map words or sentences to vectors, which are then stored in a database. A new sentence can be compared against these stored embeddings to determine how related it is.
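As a sketch of this relatedness measure (the sentences are invented), we can embed a few strings with OpenAIEmbeddings and compare them with cosine similarity:

```python
import numpy as np
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# embed_query returns a vector of floats for a single string
v1 = np.array(embeddings.embed_query('How are treaties ratified?'))
v2 = np.array(embeddings.embed_query('The senate approves treaties.'))
v3 = np.array(embeddings.embed_query('Bread needs yeast to rise.'))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(v1, v2))  # related sentences score higher
print(cosine(v1, v3))  # unrelated sentences score lower
```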
With the embeddings in place, we'll initialize the vector database.

We then tell the RetrievalQA chain to use the vector store for question-and-answer retrieval: the chain looks up the relevant vectors in the vector database and returns a response based on the user's question.
Install the dependency:

```
pip install tiktoken
```
```python
import os
from apikey import apikey
import streamlit as st  # for our UI front end
from langchain.chat_models import ChatOpenAI  # for the GPT-3.5/4 models
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

os.environ["OPENAI_API_KEY"] = apikey

st.title('Chat with Document')  # our page title

loader = TextLoader('./constitution.txt')
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
vector_store = Chroma.from_documents(chunks, embeddings)

# Initialize the OpenAI chat model
llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=0)
retriever = vector_store.as_retriever()
chain = RetrievalQA.from_chain_type(llm, retriever=retriever)

# Get the question from user input
question = st.text_input('Input your question')
if question:
    # Run the chain
    response = chain.run(question)
    st.write(response)
```
The question is used to retrieve relevant documents from the vector database. Relevant documents are identified by their high similarity to the key terms in the question. Once these documents are fetched, they're passed to the model together with the question to generate the answer.
```python
import os
from apikey import apikey
import streamlit as st  # for our UI front end
from langchain.chat_models import ChatOpenAI  # for the GPT-3.5/4 models
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain

os.environ["OPENAI_API_KEY"] = apikey

st.title('Chat with Document')  # our page title

loader = TextLoader('./constitution.txt')
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
vector_store = Chroma.from_documents(chunks, embeddings)

# Initialize the OpenAI chat model
llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=0)
retriever = vector_store.as_retriever()
crc = ConversationalRetrievalChain.from_llm(llm, retriever)

# Get the question from user input
question = st.text_input('Input your question')
if question:
    if 'history' not in st.session_state:
        st.session_state['history'] = []
    response = crc.run({
        'question': question,
        'chat_history': st.session_state['history']
    })
    st.session_state['history'].append((question, response))
    st.write(response)
```
Suppose you want to display the chat history.
```python
import os
from apikey import apikey
import streamlit as st  # for our UI front end
from langchain.chat_models import ChatOpenAI  # for the GPT-3.5/4 models
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain

os.environ["OPENAI_API_KEY"] = apikey

st.title('Chat with Document')  # our page title

loader = TextLoader('./constitution.txt')
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
vector_store = Chroma.from_documents(chunks, embeddings)

# Initialize the OpenAI chat model
llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=0)
retriever = vector_store.as_retriever()
crc = ConversationalRetrievalChain.from_llm(llm, retriever)

# Get the question from user input
question = st.text_input('Input your question')
if question:
    if 'history' not in st.session_state:
        st.session_state['history'] = []
    response = crc.run({
        'question': question,
        'chat_history': st.session_state['history']
    })
    st.session_state['history'].append((question, response))
    st.write(response)
    st.write(st.session_state['history'])  # print the chat history
```
The chat history is structured as a list of (question, answer) pairs. For more structured output, use a for loop:
```python
for prompts in st.session_state['history']:
    st.write("Question: " + prompts[0])
    st.write("Answer: " + prompts[1])
```
A clear-history function ensures that whenever a new file is uploaded, the previous chat history is cleared.
```python
def clear_history():
    if 'history' in st.session_state:
        del st.session_state['history']
```
Only PDF, DOCX, and TXT files are accepted. Once a file is uploaded, it's stored in the `uploaded_file` variable.

The user clicks the button when ready to upload the chosen document. Clicking it also triggers the `clear_history` function.
```python
uploaded_file = st.file_uploader('Upload file:', type=['pdf', 'docx', 'txt'])
add_file = st.button('Add File', on_click=clear_history)
```
Once the file upload completes, its content is read in binary format, stored in the `bytes_data` variable, and written to a local file so a loader can process it:
```python
uploaded_file = st.file_uploader('Upload file:', type=['pdf', 'docx', 'txt'])
add_file = st.button('Add File', on_click=clear_history)

if uploaded_file and add_file:
    bytes_data = uploaded_file.read()
    file_name = os.path.join('./', uploaded_file.name)
    with open(file_name, 'wb') as f:
        f.write(bytes_data)
    loader = TextLoader(file_name)
    documents = loader.load()
```
```python
import os
from apikey import apikey
import streamlit as st  # for our UI front end
from langchain.chat_models import ChatOpenAI  # for the GPT-3.5/4 models
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain

os.environ["OPENAI_API_KEY"] = apikey

def clear_history():
    if 'history' in st.session_state:
        del st.session_state['history']

st.title('Chat with Document')  # our page title

uploaded_file = st.file_uploader('Upload file:', type=['pdf', 'docx', 'txt'])
add_file = st.button('Add File', on_click=clear_history)

if uploaded_file and add_file:
    bytes_data = uploaded_file.read()
    file_name = os.path.join('./', uploaded_file.name)
    with open(file_name, 'wb') as f:
        f.write(bytes_data)

    loader = TextLoader(file_name)
    documents = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = text_splitter.split_documents(documents)

    embeddings = OpenAIEmbeddings()
    vector_store = Chroma.from_documents(chunks, embeddings)

    # Initialize the OpenAI chat model
    llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=0)
    retriever = vector_store.as_retriever()
    crc = ConversationalRetrievalChain.from_llm(llm, retriever)
    st.session_state.crc = crc

    # Success message once the file is chunked and embedded
    st.success('File uploaded, chunked and embedded successfully')

# Get the question from user input
question = st.text_input('Input your question')
if question:
    if 'crc' in st.session_state:
        crc = st.session_state.crc
        if 'history' not in st.session_state:
            st.session_state['history'] = []
        response = crc.run({
            'question': question,
            'chat_history': st.session_state['history']
        })
        st.session_state['history'].append((question, response))
        st.write(response)
        # st.write(st.session_state['history'])  # print the chat history

        for prompts in st.session_state['history']:
            st.write("Question: " + prompts[0])
            st.write("Answer: " + prompts[1])
```
To support the different file types, we can pick a loader based on the file extension:

```python
# Choose a document loader according to the file extension
name, extension = os.path.splitext(file_name)
if extension == '.pdf':
    from langchain.document_loaders import PyPDFLoader
    loader = PyPDFLoader(file_name)
elif extension == '.docx':
    from langchain.document_loaders import Docx2txtLoader
    loader = Docx2txtLoader(file_name)
elif extension == '.txt':
    from langchain.document_loaders import TextLoader
    loader = TextLoader(file_name)
else:
    st.write('Document format is not supported!')
```
When a user uploads a file, there's a noticeable wait during the chunking and embedding stage. Adding a spinner with a message like "Reading, chunking and embedding file" gives the user a visual cue that processing is underway.
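Here's a minimal sketch of wrapping the processing steps from the script above in Streamlit's st.spinner context manager (the message text is the one suggested above):

```python
if uploaded_file and add_file:
    # The spinner is shown while the indented block runs.
    with st.spinner('Reading, chunking and embedding file ...'):
        bytes_data = uploaded_file.read()
        file_name = os.path.join('./', uploaded_file.name)
        with open(file_name, 'wb') as f:
            f.write(bytes_data)

        loader = TextLoader(file_name)
        documents = loader.load()
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
        chunks = text_splitter.split_documents(documents)
        embeddings = OpenAIEmbeddings()
        vector_store = Chroma.from_documents(chunks, embeddings)

        llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=0)
        retriever = vector_store.as_retriever()
        st.session_state.crc = ConversationalRetrievalChain.from_llm(llm, retriever)
    st.success('File uploaded, chunked and embedded successfully')
```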
LangChain document loaders: https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript

Install the dependency:

```
pip install youtube-transcript-api
```
```python
import os
from apikey import apikey
import streamlit as st  # for our UI front end
from langchain.chat_models import ChatOpenAI  # for the GPT-3.5/4 models
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import YoutubeLoader

os.environ["OPENAI_API_KEY"] = apikey

def clear_history():
    if 'history' in st.session_state:
        del st.session_state['history']

st.title('Chat with Youtube')

youtube_url = st.text_input('Input your Youtube URL')
if youtube_url:
    # Load the video's transcript
    loader = YoutubeLoader.from_youtube_url(youtube_url)
    documents = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = text_splitter.split_documents(documents)

    embeddings = OpenAIEmbeddings()
    vector_store = Chroma.from_documents(chunks, embeddings)

    # Initialize the OpenAI chat model
    llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=0)
    retriever = vector_store.as_retriever()
    crc = ConversationalRetrievalChain.from_llm(llm, retriever)
    st.session_state.crc = crc

    # Success message once the transcript is chunked and embedded
    st.success('Transcript loaded, chunked and embedded successfully')

# Get the question from user input
question = st.text_input('Input your question')
if question:
    if 'crc' in st.session_state:
        crc = st.session_state.crc
        if 'history' not in st.session_state:
            st.session_state['history'] = []
        response = crc.run({
            'question': question,
            'chat_history': st.session_state['history']
        })
        st.session_state['history'].append((question, response))
        st.write(response)
        # st.write(st.session_state['history'])  # print the chat history

        for prompts in st.session_state['history']:
            st.write("Question: " + prompts[0])
            st.write("Answer: " + prompts[1])
```