
Fix APA bugs in creation, add test and demo yaml #536

Merged
4 commits merged into main from kangrong/fix/apa_creation on Dec 18, 2024

Conversation

@kr11 kr11 (Collaborator) commented Dec 17, 2024

Pull Request Description


This PR addresses several issues related to APA:

  1. Bug Fix: Resolves the panic encountered during the creation of APA resources. The panic was caused by the unimplemented GetUpFluctuationTolerance and GetDownFluctuationTolerance methods (see the sketch after this list).

  2. New Configuration: Enables users to define a custom metrics window for APA through the label apa.autoscaling.aibrix.ai/window, which sets the duration of the metrics window that APA observes (see the parsing sketch after the demo APA yaml).

  3. Testing: Adds unit tests for APA.
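
To make item 1 concrete, below is a minimal sketch of what the two missing getters could look like, backed by the tolerance labels used in the demo yaml. The type name, helper, and default values are hypothetical and only illustrate the idea; they are not the actual AIBrix implementation.

// Hypothetical sketch only: ApaSpec, toleranceFromLabel, and the defaults are
// assumed names/values, not the real AIBrix code.
package apa

import "strconv"

const (
	upToleranceLabel   = "autoscaling.aibrix.ai/up-fluctuation-tolerance"
	downToleranceLabel = "autoscaling.aibrix.ai/down-fluctuation-tolerance"

	defaultUpTolerance   = 0.1 // assumed default
	defaultDownTolerance = 0.1 // assumed default
)

// ApaSpec is a stand-in carrier for the PodAutoscaler's labels.
type ApaSpec struct {
	Labels map[string]string
}

// GetUpFluctuationTolerance returns the tolerance used when scaling up.
func (s ApaSpec) GetUpFluctuationTolerance() float64 {
	return toleranceFromLabel(s.Labels, upToleranceLabel, defaultUpTolerance)
}

// GetDownFluctuationTolerance returns the tolerance used when scaling down.
func (s ApaSpec) GetDownFluctuationTolerance() float64 {
	return toleranceFromLabel(s.Labels, downToleranceLabel, defaultDownTolerance)
}

// toleranceFromLabel parses a float label value, falling back to def when the
// label is missing or malformed.
func toleranceFromLabel(labels map[string]string, key string, def float64) float64 {
	if v, ok := labels[key]; ok {
		if f, err := strconv.ParseFloat(v, 64); err == nil {
			return f
		}
	}
	return def
}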

Demo PodAutoscaler yamls

Demo APA yaml:

apiVersion: autoscaling.aibrix.ai/v1alpha1
kind: PodAutoscaler
metadata:
  name: metric-server-autoscaler
  namespace: kube-system
  labels:
    app.kubernetes.io/name: aibrix
    app.kubernetes.io/managed-by: kustomize
    autoscaling.aibrix.ai/up-fluctuation-tolerance: "0.1"
    autoscaling.aibrix.ai/down-fluctuation-tolerance: "0.2"
    apa.autoscaling.aibrix.ai/window: "30s"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-server
  minReplicas: 1
  maxReplicas: 4
  metricsSources:
  - metricSourceType: "pod"
    protocolType: "https"
    port: "4443"
    path: "/metrics"
    targetMetric: "go_threads"
    targetValue: "20"
  scalingStrategy: "APA"
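
The apa.autoscaling.aibrix.ai/window label above carries a Go-style duration string ("30s"). As a rough illustration of how such a label could be resolved into the metrics window, here is a minimal sketch; the helper name and the default value are assumptions, not the actual AIBrix code.

// Hypothetical sketch only: windowFromLabels and defaultWindow are assumed
// names/values, not the real AIBrix code.
package apa

import "time"

const (
	windowLabel   = "apa.autoscaling.aibrix.ai/window"
	defaultWindow = 60 * time.Second // assumed default when the label is absent
)

// windowFromLabels returns the metrics window declared on the PodAutoscaler,
// falling back to defaultWindow for a missing or malformed value.
func windowFromLabels(labels map[string]string) time.Duration {
	if v, ok := labels[windowLabel]; ok {
		if d, err := time.ParseDuration(v); err == nil && d > 0 {
			return d
		}
	}
	return defaultWindow
}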

Demo HPA yaml:

apiVersion: autoscaling.aibrix.ai/v1alpha1
kind: PodAutoscaler
metadata:
  name: metric-server-autoscaler
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-server
  minReplicas: 1
  maxReplicas: 4
  metricsSources:
    - metricSourceType: "pod"
      protocolType: "https"
      port: "4443"
      path: "/metrics"
      targetMetric: "go_threads"
      targetValue: "20"
  scalingStrategy: "HPA"

Description of the created HPA:

kubectl describe hpa -n kube-system
Name:                    metric-server-autoscaler
Namespace:               kube-system
Labels:                  <none>
Annotations:             <none>
CreationTimestamp:       Tue, 17 Dec 2024 21:27:38 +0800
Reference:               Deployment/metrics-server
Metrics:                 ( current / target )
  "go_threads" on pods:  <unknown> / 20
Min replicas:            1
Max replicas:            4
Deployment pods:         1 current / 0 desired
Conditions:
  Type           Status  Reason               Message
  ----           ------  ------               -------
  AbleToScale    True    SucceededGetScale    the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetPodsMetric  the HPA was unable to compute the replica count: unable to get metric go_threads: unable to fetch metrics from custom metrics API: no custom metrics API (custom.metrics.k8s.io) registered
Events:
  Type     Reason                        Age   From                       Message
  ----     ------                        ----  ----                       -------
  Warning  FailedGetPodsMetric           12s   horizontal-pod-autoscaler  unable to get metric go_threads: unable to fetch metrics from custom metrics API: no custom metrics API (custom.metrics.k8s.io) registered
  Warning  FailedComputeMetricsReplicas  12s   horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get pods metric value: unable to get metric go_threads: unable to fetch metrics from custom metrics API: no custom metrics API (custom.metrics.k8s.io) registered



Contribution Guidelines

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@kr11 kr11 force-pushed the kangrong/fix/apa_creation branch from 0e559bf to 7018f25 on December 18, 2024 02:07
@Jeffwan Jeffwan (Collaborator) commented Dec 18, 2024

/lgtm
Reran the test and it passed; we can merge this now.

@Jeffwan Jeffwan merged commit 0e520fc into main Dec 18, 2024
10 checks passed
@Jeffwan Jeffwan deleted the kangrong/fix/apa_creation branch December 18, 2024 05:28
gangmuk pushed a commit that referenced this pull request Jan 25, 2025
* add APA param `window`. Add test, implement apa.get_fluctuation

* fix lint

* fix duplicate klog init

* revert mistaken commit