To use IBM Watsonx.ai, you need to set up your credentials and project ID.
import os
from ibm_watsonx_ai import Credentials

# Initialize credentials - replace with your own API key
# Best practice: store API keys in environment variables
# Ensure WATSONX_API_KEY, WATSONX_URL, and WATSONX_PROJECT_ID are set in your .env file or environment
os.environ["WATSONX_API_KEY"] = os.getenv("WATSONX_API_KEY", "your_watsonx_api_key_here")
os.environ["WATSONX_URL"] = os.getenv("WATSONX_URL", "https://eu-de.ml.cloud.ibm.com")  # Example URL, ensure it's correct for your region
os.environ["WATSONX_PROJECT_ID"] = os.getenv("WATSONX_PROJECT_ID", "your-project-id-here")

credentials = Credentials(
    url=os.environ["WATSONX_URL"],
    api_key=os.environ["WATSONX_API_KEY"],
)

# Project ID for your IBM Watsonx project
project_id = os.environ["WATSONX_PROJECT_ID"]
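Since the snippet above reads everything through os.getenv, you can keep the secrets in a local .env file and load it before this block runs. A minimal sketch, assuming the python-dotenv package is installed (it is not part of the ibm-watsonx-ai SDK):

# Load variables from a local .env file into the process environment.
# Assumes python-dotenv is installed: pip install python-dotenv
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory, if present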
Let’s use IBM Watsonx.ai to generate text based on a prompt:
from ibm_watsonx_ai.foundation_models import ModelInference

# Initialize text generation model
gen_model = ModelInference(
    model_id="google/flan-ul2",
    credentials=credentials,
    project_id=project_id,
)

# Generate text with a prompt
prompt = "Write a short poem about artificial intelligence:"
response = gen_model.generate_text(prompt)
print(f"Generated Text:\n{response}")
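Decoding behavior can be tuned by passing generation parameters. A minimal sketch assuming the GenTextParamsMetaNames helper shipped with the ibm-watsonx-ai SDK; the specific values are illustrative, not recommendations:

from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

# Illustrative decoding parameters - tune these for your use case
params = {
    GenParams.DECODING_METHOD: "sample",
    GenParams.MAX_NEW_TOKENS: 100,
    GenParams.TEMPERATURE: 0.7,
}

response = gen_model.generate_text(prompt=prompt, params=params)
print(f"Generated Text (sampled):\n{response}")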
Now, let’s use a different model for chat completion:
# Initialize chat model
chat_model = ModelInference(
    model_id="meta-llama/llama-3-8b-instruct",
    credentials=credentials,
    project_id=project_id,
)

# Format messages for chat
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "What are the three laws of robotics?"},
]

# Get chat response
chat_response = chat_model.chat(messages)

# The chat endpoint returns an OpenAI-style payload with a "choices" list
print(f"Chat Response:\n{chat_response['choices'][0]['message']['content']}")
Let’s reuse the same chat model for another question:

# New chat messages
messages = [
    {"role": "system", "content": "You are an expert in machine learning."},
    {"role": "user", "content": "Explain the difference between supervised and unsupervised learning in simple terms."},
]

# Get chat response
chat_response = chat_model.chat(messages)
print(f"Chat Response:\n{chat_response['choices'][0]['message']['content']}")
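Because each call receives the full message list, a multi-turn conversation is just a matter of appending to it. A short sketch building only on the chat_model used above; the follow-up question is illustrative:

# Carry the conversation forward by appending the assistant's reply
assistant_reply = chat_response["choices"][0]["message"]["content"]
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({"role": "user", "content": "Can you give one concrete example of each?"})

follow_up = chat_model.chat(messages)
print(f"Follow-up Response:\n{follow_up['choices'][0]['message']['content']}")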
Finally, let’s close the persistent connections to the models, if any were established, and end the AgentOps session.
import agentops

# Close connections if persistent connections were used.
# This is good practice if the SDK version/usage implies persistent connections.
try:
    gen_model.close_persistent_connection()
    chat_model.close_persistent_connection()
except AttributeError:
    # Handle cases where this method might not exist (e.g. newer SDK versions or stateless calls)
    print("Note: close_persistent_connection not available or needed for one or more models.")

agentops.end_session("Success")  # Manually end session
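If you want the session to be recorded even when a model call fails, you can wrap the calls so the session always ends with an explicit state. A minimal sketch, assuming the legacy string-based AgentOps end states ("Success"/"Fail") used above:

import agentops

agentops.init()  # assumes AGENTOPS_API_KEY is set in the environment

try:
    print(gen_model.generate_text("Say hello."))
    agentops.end_session("Success")
except Exception:
    agentops.end_session("Fail")  # "Fail" is a legacy AgentOps end state
    raise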