OLS-1379: Add oc tools #2216
base: main
Conversation
@onmete: This pull request references OLS-1379 which is a valid jira issue. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@asamal4 this should be easy to integrate once you have your PR ready.

/hold
ols/src/tools/oc_cli.py (outdated)

```python
        raise e


def oc_get(args: list[str]) -> subprocess.CompletedProcess:
```
Input type: why is it `list[str]`? Shouldn't it be just `str`?
Output type: I haven't tested, but I think we can keep this as `str` (we can convert the output to a string).
Yup, the output should be a string.
As for the input, technically it is a list of strings. From my experiments, the LLM had no issue returning it, but I'll try to double-check which is easier for the LLM to understand (a list of strings vs. one string).
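For illustration, here is a minimal sketch of the shape being discussed: the input stays a list of strings and the output is converted to `str`. This is an assumption-laden sketch, not the PR's final code; `oc_get` here shells out directly rather than going through the PR's `run_oc` helper.

```python
import subprocess


def oc_get(args: list[str]) -> str:
    """Run `oc get` with the given arguments and return stdout as text."""
    result = subprocess.run(
        ["oc", "get", *args],  # list form: no shell involved, no quoting issues
        capture_output=True,
        text=True,  # decode bytes so the tool returns str
        check=False,
    )
    return result.stdout
```

Keeping the input as a list sidesteps shell word-splitting, while returning `str` gives the LLM plain text to reason over.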
ols/src/tools/oc_cli.py (outdated)

```python
    # List one or more resources by their type and names.
    oc get rc/web service/frontend pods/web-pod-13je7
    """
    result = run_oc(["get", *sanitize_oc_args(args)])
```
What will the `args` value be? The part of the command after `oc get`?
Yes. Besides sanitization, we can also catch the case when the LLM returns not just the args but the full command, e.g. `oc get whatever` instead of just `whatever`.
We are probably relying too much on the model to generate the command here. This will also require us to handle a lot of corner cases (a lot of sanitization rules).
Anyway, let's see if others have any opinion on this.
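As a sketch of the "catch the full command" idea, assuming `sanitize_oc_args` receives the raw model-provided arguments (the helper name matches the diff; this leading-command handling is illustrative, not the merged behavior):

```python
def sanitize_oc_args(args: list[str]) -> list[str]:
    """Drop a leading 'oc get' (or bare 'oc') if the model echoed the full command."""
    cleaned = list(args)
    if cleaned[:2] == ["oc", "get"]:
        cleaned = cleaned[2:]
    elif cleaned[:1] == ["oc"]:
        cleaned = cleaned[1:]
    return cleaned
```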
/assign @asamal4
```python
    is_final_round: bool,
) -> tuple[AIMessage, TokenCounter]:
    """Invoke LLM with or without tools based on conditions."""
    llm = (
```
Perhaps nitpicking, but can we add this condition to the `_invoke_llm` method itself? That would avoid additional lines of code.
updated
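Roughly what the suggestion amounts to, assuming a LangChain-style chat model whose `bind_tools` attaches tool definitions; the method and attribute names here are inferred from the diff excerpt, not copied from the final code:

```python
def _invoke_llm(self, messages: list, use_tools: bool) -> AIMessage:
    """Invoke the LLM, binding tools only when the current round allows them."""
    llm = self.llm.bind_tools(self.tools) if use_tools else self.llm
    return llm.invoke(messages)
```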
```python
        return {}

    logger.info("Introspection enabled - using default tools selection")
    tools_map: dict = {}  # place for other/default tools
```
Do we need to have this here?
We define all tools in the tools.py file; the default ones can also be added there.
This would also let us avoid a separate description-length check.
updated
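In sketch form, the consolidation could look like the following in tools.py, with one combined map so a single description-length check covers everything (`oc_get_tool` and the map names are hypothetical):

```python
from ols.src.tools.oc_cli import oc_get_tool  # hypothetical tool object

default_tools: dict = {}  # place for other/default tools
oc_tools: dict = {"oc_get": oc_get_tool}

# One merged map: callers and validation only ever see `all_tools`.
all_tools: dict = {**default_tools, **oc_tools}
```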
We discussed identifying different test scenarios. But for the time being, could you please modify one e2e test case? That way, once the introspectionEnabled property lands, anyone can simply un-comment the tool-calling e2e test suites. Test any one query that doesn't require additional setup.
@asamal4 About the e2e, I would rather do that in a separate PR, or after we figure out how to test the tools.
```diff
@@ -1,7 +1,7 @@
 [tool.ruff]

 # description of all rules are available on https://docs.astral.sh/ruff/rules/
-lint.select = ["D", "E", "F", "W", "C", "S", "I", "G", "TCH", "SLOT", "RUF", "C90", "N", "YTT", "ASYNC", "A", "C4", "T10", "PGH", "FURB", "PERF", "AIR", "NPY", "FLY", "PLW2901"]
+lint.select = ["D", "E", "F", "W", "C", "S", "I", "G", "TCH", "SLOT", "RUF", "C90", "N", "YTT", "ASYNC", "A", "C4", "T10", "FURB", "PERF", "AIR", "NPY", "FLY", "PLW2901"]
```
I've removed PGH from linting - when we want to ignore some error (e.g. a false positive) we can do that just with `# type: ignore` instead of writing down a specific code - especially in cases where there are multiple errors on one line.
cc @tisnik
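For context, the relevant rule appears to be PGH003 (blanket type ignore), which flags a bare `# type: ignore` and requires specific error codes; an illustrative before/after:

```python
def untyped_call():  # no return annotation, so mypy may complain at call sites
    ...


result = untyped_call()  # type: ignore  # flagged by PGH003 when PGH is enabled
result = untyped_call()  # type: ignore[assignment]  # the specific form PGH003 wants
# With PGH dropped from lint.select, the bare form above is accepted.
```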
A separate PR is fine. BTW, I am only requesting an update to the one existing e2e test case. I am not sure how much time we will take to figure out all the scenarios, but we can still enable the tool-calling e2e test case once the operator adds the flag (this will happen before we figure out the scenarios). This is just to make sure that tool calling works; I believe we need to verify that things are working, and this is the reason we added the rhelai test flow with the OpenAI URL.
@asamal4 Updated the question; is this what you were after?

Yes, thank you. Most likely it is going to fail for granite, but we can deal with that in a separate PR.

Based on testing, we will update the tool definition.
```python
            if char in blocked_chars:
                # stop processing further characters in this argument
                logger.warning(
                    "Problematic character(s) found in oc tool argument '%s'", arg
```
We should probably stop processing such input altogether rather than trying to sanitize it further.
we'll tackle this in OLS-1443
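A minimal sketch of the reject-instead-of-sanitize behavior suggested above (the blocked-character set and the function name are illustrative; the actual follow-up is tracked in OLS-1443):

```python
BLOCKED_CHARS = set(";&|$`<>")  # illustrative set of shell metacharacters


def validate_oc_args(args: list[str]) -> list[str]:
    """Reject any argument containing a blocked character instead of stripping it."""
    for arg in args:
        if any(char in BLOCKED_CHARS for char in arg):
            raise ValueError(f"Blocked character in oc argument: {arg!r}")
    return args
```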
New changes are detected. LGTM label has been removed.
@onmete: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Description
Adding oc tools.
OLS-1410
Type of change