OllamaLLM Objects
__init__
Arguments:

- name (str): The name of the Ollama model to use.
- temperature (float): The temperature of the LLM.
- max_tokens (Optional[int]): The maximum number of tokens to generate.
- stream (bool): Whether to stream the response.
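To make the constructor concrete, here is a minimal usage sketch. The import path and the model name are assumptions for illustration; substitute the module path and model tag used in your installation.

```python
# A minimal sketch, not a canonical example: the import path and the
# model name below are assumptions and may differ in your installation.
from semantic_router.llms.ollama import OllamaLLM

llm = OllamaLLM(
    name="llama3",      # tag of a model already pulled locally (e.g. `ollama pull llama3`)
    temperature=0.2,    # lower values give more deterministic output
    max_tokens=256,     # cap on the number of generated tokens
    stream=False,       # return the full completion rather than streaming chunks
)
```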
__call__
Arguments (each optional argument, when provided, overrides the corresponding value set in __init__ for that call):

- messages (List[Message]): The messages to pass to the OllamaLLM.
- temperature (Optional[float]): The temperature of the LLM.
- name (Optional[str]): The name of the Ollama model to use.
- max_tokens (Optional[int]): The maximum number of tokens to generate.
- stream (Optional[bool]): Whether to stream the response.
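A sketch of a call, continuing the `llm` instance from the constructor example above. The `Message` import path and its `role`/`content` fields are assumptions; adjust them to your package's schema.

```python
from semantic_router.schema import Message  # assumed location of the Message type

# role/content fields are assumed; use whatever fields your Message type defines
messages = [
    Message(role="system", content="You are a concise assistant."),
    Message(role="user", content="Explain Ollama in one sentence."),
]

# Optional keyword arguments override the values set in __init__ for this call only.
reply = llm(messages, temperature=0.0, max_tokens=128)
print(reply)
```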

