query='expression language'
filter=Operation(
    operator=<Operator.AND: 'and'>,
    arguments=[
        Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='completness', value='Very'),
        Operation(
            operator=<Operator.OR: 'or'>,
            arguments=[
                Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='talks_about_retriever', value=True),
                Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='talks_about_vectorstore', value=True),
            ],
        ),
    ],
)
limit=None
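The filter above encodes `completness == "Very"` AND (`talks_about_retriever == True` OR `talks_about_vectorstore == True`), applied alongside the semantic query `'expression language'`. As a minimal sketch of what that printed object is, assuming the classic `langchain.chains.query_constructor.ir` module (home of `StructuredQuery`, `Operation`, and `Comparison`), the same query could be built by hand; the attribute names, including the `completness` spelling, are kept exactly as they appear in the trace:

```python
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)

# completness == "Very" AND
# (talks_about_retriever == True OR talks_about_vectorstore == True)
structured_query = StructuredQuery(
    query="expression language",
    filter=Operation(
        operator=Operator.AND,
        arguments=[
            Comparison(
                comparator=Comparator.EQ, attribute="completness", value="Very"
            ),
            Operation(
                operator=Operator.OR,
                arguments=[
                    Comparison(
                        comparator=Comparator.EQ,
                        attribute="talks_about_retriever",
                        value=True,
                    ),
                    Comparison(
                        comparator=Comparator.EQ,
                        attribute="talks_about_vectorstore",
                        value=True,
                    ),
                ],
            ),
        ],
    ),
    limit=None,
)
print(structured_query)  # prints a repr like the trace shown above
```

The filter is an intermediate representation: a store-specific translator later converts it into the vector store's native filter syntax, which is why every document returned below satisfies it.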
[Document(page_content='# Code writing\n\nExample of how to use LCEL to write Python code.\n\n```python\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate\nfrom langchain.schema.output_parser import StrOutputParser\nfrom langchain.utilities import PythonREPL\n```\n\n> **API Reference:**\n> - [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html)\n> - [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html)\n> - [SystemMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.SystemMessagePromptTemplate.html)\n> - [HumanMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html)\n> - [StrOutputParser](https://api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.StrOutputParser.html)\n> - [PythonREPL](https://api.python.langchain.com/en/latest/utilities/langchain.utilities.python.PythonREPL.html)\n\n```python\ntemplate = """Write some python code to solve the user\'s problem. \n\nReturn only python code in Markdown format, e.g.:\n\n```python\n....\n```"""\nprompt = ChatPromptTemplate.from_messages(\n [("system", template), ("human", "{input}")]\n)\n\nmodel = ChatOpenAI()\n```\n\n```python\ndef _sanitize_output(text: str):\n _, after = text.split("```python")\n return after.split("```")[0]\n```\n\n```python\nchain = prompt | model | StrOutputParser() | _sanitize_output | PythonREPL().run\n```\n\n```python\nchain.invoke({"input": "whats 2 plus 2"})\n```', metadata={'code_snippet': True, 'completness': 'Very', 'contains_markdown_table': True, 'description': True, 'language': 'en', 'source': 'https://python.langchain.com/docs/expression_language/cookbook/code_writing', 'talks_about_chain': True, 'talks_about_expression_language': True, 'talks_about_retriever': True, 'talks_about_vectorstore': True, 'title': 'Code writing | 🦜️🔗 Langchain'}),
Document(page_content='```\n\n```python\nchain = (\n {"context": retriever, "question": RunnablePassthrough()} \n | prompt \n | model \n | StrOutputParser()\n)\n```\n\n```python\nchain.invoke("where did harrison work?")\n```\n\n```python\ntemplate = """Answer the question based only on the following context:\n{context}\n\nQuestion: {question}\n\nAnswer in the following language: {language}\n"""\nprompt = ChatPromptTemplate.from_template(template)\n\nchain = {\n "context": itemgetter("question") | retriever, \n "question": itemgetter("question"), \n "language": itemgetter("language")\n} | prompt | model | StrOutputParser()\n```\n\n```python\nchain.invoke({"question": "where did harrison work", "language": "italian"})\n```', metadata={'code_snippet': True, 'completness': 'Very', 'contains_markdown_table': False, 'description': True, 'language': 'en', 'source': 'https://python.langchain.com/docs/expression_language/cookbook/retrieval', 'talks_about_chain': True, 'talks_about_expression_language': True, 'talks_about_retriever': True, 'talks_about_vectorstore': False, 'title': 'RAG | 🦜️🔗 Langchain'}),
Document(page_content='## Using Constitutional Principles\u200b\n\nCustom rubrics are similar to principles from [Constitutional AI](https://arxiv.org/abs/2212.08073). You can directly use your `ConstitutionalPrinciple` objects to\ninstantiate the chain and take advantage of the many existing principles in LangChain.\n\n```python\nfrom langchain.chains.constitutional_ai.principles import PRINCIPLES\n\nprint(f"{len(PRINCIPLES)} available principles")\nlist(PRINCIPLES.items())[:5]\n```\n\n```python\nevaluator = load_evaluator(\n EvaluatorType.CRITERIA, criteria=PRINCIPLES["harmful1"]\n)\neval_result = evaluator.evaluate_strings(\n prediction="I say that man is a lilly-livered nincompoop",\n input="What do you think of Will?",\n)\nprint(eval_result)\n```\n\n## Configuring the LLM\u200b\n\nIf you don\'t specify an eval LLM, the `load_evaluator` method will initialize a `gpt-4` LLM to power the grading chain. Below, use an anthropic model instead.\n\n```python\n# %pip install ChatAnthropic\n# %env ANTHROPIC_API_KEY=<API_KEY>\n```\n\n```python\nfrom langchain.chat_models import ChatAnthropic\n\nllm = ChatAnthropic(temperature=0)\nevaluator = load_evaluator("criteria", llm=llm, criteria="conciseness")\n```\n\n> **API Reference:**\n> - [ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.anthropic.ChatAnthropic.html)\n\n```python\neval_result = evaluator.evaluate_strings(\n prediction="What\'s 2+2? That\'s an elementary question. The answer you\'re looking for is that two and two is four.",\n input="What\'s 2+2?",\n)\nprint(eval_result)\n```', metadata={'code_snippet': True, 'completness': 'Very', 'contains_markdown_table': True, 'description': True, 'language': 'en', 'source': 'https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain', 'talks_about_chain': True, 'talks_about_expression_language': True, 'talks_about_retriever': True, 'talks_about_vectorstore': True, 'title': 'Criteria Evaluation | 🦜️🔗 Langchain'}),
Document(page_content='# You can load by enum or by raw python string\nevaluator = load_evaluator(\n "embedding_distance", distance_metric=EmbeddingDistance.EUCLIDEAN\n)\n```\n\n## Select Embeddings to Use\u200b\n\nThe constructor uses `OpenAI` embeddings by default, but you can configure this however you want. Below, use huggingface local embeddings\n\n```python\nfrom langchain.embeddings import HuggingFaceEmbeddings\n\nembedding_model = HuggingFaceEmbeddings()\nhf_evaluator = load_evaluator("embedding_distance", embeddings=embedding_model)\n```\n\n> **API Reference:**\n> - [HuggingFaceEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceEmbeddings.html)\n\n```python\nhf_evaluator.evaluate_strings(prediction="I shall go", reference="I shan\'t go")\n```\n\n```python\nhf_evaluator.evaluate_strings(prediction="I shall go", reference="I will go")\n```\n\n_1. Note: When it comes to semantic similarity, this often gives better results than older string distance metrics (such as those in the [StringDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.StringDistanceEvalChain.html#langchain.evaluation.string_distance.base.StringDistanceEvalChain)), though it tends to be less reliable than evaluators that use the LLM directly (such as the [QAEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html#langchain.evaluation.qa.eval_chain.QAEvalChain) or [LabeledCriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain)) _', metadata={'code_snippet': True, 'completness': 'Very', 'contains_markdown_table': True, 'description': True, 'language': 'en', 'source': 'https://python.langchain.com/docs/guides/evaluation/string/embedding_distance', 'talks_about_chain': True, 'talks_about_expression_language': True, 'talks_about_retriever': True, 'talks_about_vectorstore': True, 'title': 'Embedding Distance | 🦜️🔗 Langchain'})]
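All four hits satisfy the filter: every `metadata` dict has `completness: 'Very'`, and the OR branch holds even for the RAG page (`talks_about_vectorstore: False`) because its `talks_about_retriever` is `True`. For context, the sketch below shows one plausible way to wire up a self-query retriever that would emit a trace like the one at the top of this section; the vector store choice (`Chroma`), the field descriptions, and the tiny sample `docs` list are illustrative assumptions, not the exact setup behind these results:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Chroma

# Stand-in corpus: in the real pipeline, `docs` would be the scraped
# documentation pages carrying the metadata shown in the results above.
docs = [
    Document(
        page_content="# Code writing\n\nExample of how to use LCEL to write Python code.",
        metadata={
            "completness": "Very",  # key spelled as in the stored metadata
            "talks_about_retriever": True,
            "talks_about_vectorstore": True,
        },
    ),
]

# Describe the queryable metadata fields so the LLM can build filters over them.
metadata_field_info = [
    AttributeInfo(
        name="completness",
        description="How complete the page is, e.g. 'Very'",
        type="string",
    ),
    AttributeInfo(
        name="talks_about_retriever",
        description="Whether the page discusses retrievers",
        type="boolean",
    ),
    AttributeInfo(
        name="talks_about_vectorstore",
        description="Whether the page discusses vector stores",
        type="boolean",
    ),
]

vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "LangChain documentation pages",  # description of the document contents
    metadata_field_info,
    verbose=True,  # logs the generated StructuredQuery before running it
)

retriever.get_relevant_documents(
    "very complete docs about the expression language that talk about "
    "retrievers or vectorstores"
)
```

With `verbose=True`, classic LangChain logs the generated `query=... filter=... limit=...` line before executing the search, which is exactly the kind of trace this section opens with.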