mlflow
Use the fake OpenAI service for testing llama index flavor
#12619
Merged

Use the fake OpenAI service for testing llama index flavor #12619

B-Step62 · 311 days ago (edited)
🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

pip install git+https://github.com/mlflow/mlflow.git@refs/pull/12619/merge

Checkout with GitHub CLI

gh pr checkout 12619

What changes are proposed in this pull request?

Use the fake OpenAI service in the llama-index tests instead of mock LLM/embedding classes, so we can test the integration more reliably.
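The idea can be sketched as follows (the function name, port, and env-var wiring here are hypothetical illustrations, not the PR's actual fixture, which lives in tests/helper_functions.py and starts the fake server as a subprocess): instead of mocking LLM/embedding classes, tests point the OpenAI client at a local fake server by overriding the standard OpenAI environment variables, so llama-index exercises its real client code path against the fake service.

```python
# Hypothetical sketch: route OpenAI traffic to a local fake server by
# overriding the standard OpenAI client environment variables.
import os


def configure_fake_openai(base_url: str) -> None:
    # Legacy base-URL variable read by OpenAI clients; a fake server
    # accepts (and ignores) any API key.
    os.environ["OPENAI_API_BASE"] = base_url
    os.environ["OPENAI_API_KEY"] = "test-key"


configure_fake_openai("http://localhost:8000")
```

With this in place, any llama-index component that constructs an OpenAI client sends its requests to the local fake service rather than the real API.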

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)
github-actions · 311 days ago (edited 310 days ago)

Documentation preview for 116eec1 will be available when this CircleCI job
completes successfully.

github-actions added the area/models label
github-actions added the rn/none label
B-Step62 requested a review from mlflow-automation 311 days ago
github-actions requested a review from BenWilson2 311 days ago
github-actions requested a review from daniellok-db 311 days ago
github-actions requested a review from harupy 311 days ago
github-actions requested a review from serena-ruan 311 days ago
github-actions requested a review from WeichenXu123 311 days ago
github-actions requested a review from xq-yin 311 days ago
github-actions removed the review request from mlflow-automation 311 days ago
B-Step62 requested a review from michael-berk 311 days ago
B-Step62 requested a review from mlflow-automation 311 days ago
github-actions removed the review request from mlflow-automation 311 days ago
harupy commented on 2024-07-10
tests/helper_functions.py
harupy · 310 days ago (edited)

can we make this change on master, sync the llama-index branch, and then update the files under tests/llama_index/conftest.py to minimize conflicts?

B-Step62 · 310 days ago

Makes sense, lemme file a separate one!

B-Step62 · 310 days ago

Filed! #12622

harupy commented on 2024-07-10
tests/helper_functions.py
```python
        str(port),
    ]
) as proc:
    base_url = f"http://localhost:{port}"
    for _ in range(3):
        try:
            resp = requests.get(f"{base_url}/health")
        except requests.ConnectionError:
            time.sleep(1)
            continue
        if resp.ok:
            break
    else:
        raise RuntimeError("Failed to start mock OpenAI server")

    yield base_url
    proc.kill()
```
harupy · 310 days ago 👍 1
Suggested change

Before:

```python
base_url = f"http://localhost:{port}"
for _ in range(3):
    try:
        resp = requests.get(f"{base_url}/health")
    except requests.ConnectionError:
        time.sleep(1)
        continue
    if resp.ok:
        break
else:
    raise RuntimeError("Failed to start mock OpenAI server")
yield base_url
proc.kill()
```

After:

```python
try:
    base_url = f"http://localhost:{port}"
    for _ in range(3):
        try:
            resp = requests.get(f"{base_url}/health")
        except requests.ConnectionError:
            time.sleep(1)
            continue
        if resp.ok:
            break
    else:
        raise RuntimeError("Failed to start mock OpenAI server")
    yield base_url
finally:
    proc.kill()
```

to ensure the server is always killed.
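The suggestion rests on a property of generator-based fixtures: code in a finally block around the yield runs even if the consuming test raises or the generator is closed early. A minimal self-contained demonstration (the list append stands in for proc.kill(); the fixture name is illustrative):

```python
# Demonstrates that a finally block around a generator's yield runs even
# when the generator is closed before completing, which is what guarantees
# the fake server subprocess is always killed.
killed = []


def server_fixture():
    try:
        yield "http://localhost:5000"  # hand the base URL to the test
    finally:
        killed.append(True)  # stands in for proc.kill()


gen = server_fixture()
next(gen)    # "start" the server and receive the base URL
gen.close()  # simulates the test ending, normally or via an exception
assert killed == [True]
```

pytest drives its yield fixtures the same way, so the finally clause is the idiomatic place for subprocess cleanup.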

B-Step62 added commit 116eec17: Use fake OpenAI server for llama index tests
B-Step62 force-pushed from 16cf453f to 116eec17 310 days ago
B-Step62 requested a review from harupy 310 days ago
harupy approved these changes on 2024-07-10
harupy · 310 days ago 👍 1

LGTM!

B-Step62 merged e636065c into llama-index 310 days ago
B-Step62 deleted the llama-index-mock-openai branch 310 days ago
