This section provides the information that you need to start using DKube immediately. Your access to DKube will depend upon your role. There are two roles associated with DKube.
| Role | Description |
|---|---|
| Operator | Manage the DKube cluster, users, and resources |
| Data Scientist | Create, test, train, and deploy models & pipelines |
If the User is logged in as an Operator, the dashboard view can be selected with a left-click on the User icon at the top right-hand side of the screen (circled in the screenshot). Once selected, the dashboards toggle between the Operator & Data Scientist views. The details of the dashboards are provided in the following sections.
First Time Users¶
If you want to jump directly to a guided example, go to the Data Scientist Tutorial. This steps you through the Data Scientist workflow using a simple example.
If you want to start with your own program and dataset, follow these steps.
- Load the workspaces and datasets into DKube (Section Workspaces)
- Create a Notebook (Section Create Notebook)
- Create a Training Job (Section Create Training Job)
- Deploy or Export the Trained Model (Section Deploy Model)
Otherwise, the following sections provide the concepts for the Operator and Data Scientist roles.
If you are an Operator, you will have access to both the Operator and Data Scientist roles. By default, DKube can be used without requiring any setup by the Operator. The Operator User is on-boarded and authenticated during the installation process, and this User is also enabled as a Data Scientist.
DKube Operator Concepts¶
| Concept | Description |
|---|---|
| User | Operator or Data Scientist |
| Group | Aggregation of Users |
| GPU | GPU devices connected to the Node |
| Pool | Aggregation of GPUs |
Operation of Pools¶
Pools are collections of GPUs assigned to Groups. The GPUs in the Pool are shared by the Users in the Group.
- A Pool can only contain one type of GPU
- The Users in a Group share the GPUs in the Pool
- As GPUs are used by Jobs, they reduce the number of GPUs available to other Users in the Group. Once the Job is complete (or stopped), the GPUs are available for other Jobs.
Default Pool and Group¶
DKube includes a Group and Pool with special properties, called the Default Group and Default Pool. They are both available when DKube is installed, and cannot be deleted. The Default Group and Pool allow Users to start their work as Data Scientists without needing to do a lot of setup.
- The Default Pool contains all of the GPUs that have not been allocated to another Pool by the Operator. As the GPUs are discovered and automatically on-boarded, they are placed in the Default Pool.
- As additional Pools are created, and GPUs are allocated to the new Pools, the number of GPUs in the Default Pool is reduced
- As GPUs are removed from the other (non-Default) Pools, those GPUs are allocated back into the Default Pool
- The total number of GPUs in all of the Pools will always equal the total number of GPUs across the cluster, since the Default Pool will always contain any GPU not allocated to any other Pool
- The Default Group automatically gets the allocation of the Default Pool, and it contains all of the on-boarded Users who have been assigned to it.
- As new Users are on-boarded, they are assigned to the Default Group unless a different assignment is made during the on-boarding process
- Users can be moved from the Default Group to another Group using the same steps as from any other Group
Pools behave differently depending upon whether the GPUs are spread across the cluster, or on a single node. If all of the GPUs in a Pool are on a single node, no special treatment is required to operate as described above.
If the GPUs in a Pool are distributed across more than a single node, the Advanced option must be selected when submitting a Job. The Job must be submitted with the number of worker nodes, which provides DKube with guidance about how the GPUs should be distributed.
This is described in more detail in the section on the Training Job Container.
Initial Operator Workflow¶
At installation time, the Default Pool & Group have already been created, and the Operator has been on-boarded:
- The Default Pool contains all of the resources
- The Operator has been added to the Default Group
- The Data Scientist can start without needing to do any resource configuration
If Pools and Groups are required in addition to the Defaults, the following steps can be followed:
- Create Additional Pools (Section Create Pool)
- Assign Devices to the Pools
- Create Additional Groups (Section Create Group)
- Assign a Pool to each new Group
- Add (On-Board) Users (Section Add (On-Board) User)
- Assign Users to one of the new Groups
- New Users can still be assigned to the Default Group if desired
If the Operator is the only User, or if all of the Users, including other Data Scientists, are working on the same project (and are in the same Group), nothing else needs to be done from the Operator workflow to get started.
- The Operator should select the Data Scientist dashboard
- The following section describes how to get started as a Data Scientist
Data Scientist Role¶
If you are a Data Scientist, you will only have access to the Data Scientist role.
- Several example models with their associated datasets and test data have been provided on GitHub. The locations are described in section Example Model and Dataset Locations
- The models and datasets can be downloaded through the Workspaces and Datasets screens, and data science can begin.
- A tutorial that takes you through your first usage is available at Data Scientist Tutorial
DKube Data Scientist Concepts¶
| Concept | Description |
|---|---|
| Workspaces | Directory containing program code for Notebooks and Jobs |
| Datasets | Directory containing training data for Notebooks and Jobs |
| Notebooks | Experiment with different workspaces, datasets, and hyperparameters |
| Jobs | Formal execution of code |
| Models | Trained models, ready for deployment or transfer learning |
| Inferences | Deployed model after training, for testing or production |
| Pipeline | Kubeflow Pipelines - portable, visual approach to automated deep learning |
| Experiments | Aggregation of runs |
| Runs | Single cycle through a pipeline |
The concepts of Pipelines are explained in section Kubeflow Pipelines
When a Training Job is submitted (see Jobs), DKube determines whether there are enough available GPUs in the Pool associated with the User's Group. If there are enough GPUs, the Job is scheduled immediately.
If there are not currently enough GPUs in the Pool, the job will be queued waiting for the GPUs to become available. As the currently running jobs are completed, their GPUs are released back into the Pool, and as soon as there are sufficient GPUs to run the queued job it will start.
It is possible to instruct the scheduler to initiate a Job immediately, without regard to how many GPUs are available. This directive is provided by the user in the GPUs section when submitting the job.
Status Field of Notebooks, Training Jobs, and Inference¶
The status field provides an indication of how the Notebook, Training Job, or Inference is progressing. The meaning of each status is provided here.
| Status | Description | Applies To |
|---|---|---|
| Waiting for GPUs | Released from queue; waiting for GPUs | All |
| Starting | Resources available; job is starting | All |
| Running | Job is active | All |
| Training | Training Job is running | Training Job |
| Complete | Job is complete; resources released | All |
| Error | Job failure; clone and rerun | All |
| Stopping | Job in process of stopping | All |
| Stopped | Job stopped; resources released | All |
DKube implements Katib-based hyperparameter optimization. This enables automated tuning of hyperparameters for a Job, based upon target objectives.
This is described in more detail at Katib Introduction.
Support for Kubeflow Pipelines has been integrated into DKube. Pipelines facilitate portable, automated, structured machine learning workflows based on Docker containers.
The Kubeflow Pipelines platform consists of:
- A user interface (UI) for managing and tracking experiments and runs
- An engine for scheduling multi-step machine learning workflows
- An SDK for defining and manipulating pipelines and components
- Notebooks for interacting with the system using the SDK
An overall description of Kubeflow Pipelines is provided below. The reference documentation is available at Pipelines Reference.
A pipeline is a description of a machine learning workflow, including all of the components in the workflow and how they combine in the form of a graph. The pipeline includes the definition of the inputs (parameters) required to run the pipeline and the inputs and outputs of each component.
After developing your pipeline, you can upload and share it on the Kubeflow Pipelines UI.
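To make this concrete, the sketch below defines a small pipeline with the Kubeflow Pipelines SDK (v1-style `kfp.dsl` API) and compiles it into a package that can be uploaded through the UI. The container images, step names, and parameters are illustrative placeholders, not components provided by DKube.

```python
# A minimal pipeline-definition sketch using the Kubeflow Pipelines SDK (v1 API).
# The container images and step names below are placeholders for illustration only.
from kfp import dsl, compiler


@dsl.pipeline(
    name="train-and-serve",
    description="Preprocess data, train a model, then deploy it."
)
def train_and_serve(epochs: int = 5, learning_rate: float = 0.001):
    # Each step is a containerized component; file_outputs declares what it produces.
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="example.com/preprocess:latest",      # placeholder image
        arguments=["--output", "/tmp/dataset"],
        file_outputs={"dataset": "/tmp/dataset"},
    )
    train = dsl.ContainerOp(
        name="train",
        image="example.com/train:latest",           # placeholder image
        arguments=[
            "--dataset", preprocess.outputs["dataset"],   # wires a graph edge
            "--epochs", epochs,
            "--lr", learning_rate,
        ],
        file_outputs={"model": "/tmp/model"},
    )
    dsl.ContainerOp(
        name="deploy",
        image="example.com/deploy:latest",          # placeholder image
        arguments=["--model", train.outputs["model"]],
    )


if __name__ == "__main__":
    # Compile into a package that can be uploaded and shared on the Pipelines UI.
    compiler.Compiler().compile(train_and_serve, "train_and_serve.tar.gz")
```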
The following provides a summary of the Pipelines terminology.
| Term | Description |
|---|---|
| Pipeline | Graphical description of the workflow |
| Component | Self-contained set of code that performs one step in the workflow |
| Graph | Pictorial representation of the run-time execution |
| Experiment | Aggregation of Runs, used to try different configurations of your pipeline |
| Run | Single execution of a pipeline |
| Recurring Run | Repeatable run of a pipeline |
| Run Trigger | Flag that tells the system when a recurring run spawns a new run |
| Step | Execution of a single component in the pipeline |
| Output Artifact | Output emitted by a pipeline component |
A pipeline component is a self-contained set of user code, packaged as a Docker image, that performs one step in the pipeline. For example, a component can be responsible for data preprocessing, data transformation, model training, etc.
The component contains:
| Part | Description |
|---|---|
| Client Code | The code that talks to endpoints to submit jobs |
| Runtime Code | The code that does the actual job and usually runs in the cluster |
A component specification is in YAML format, and describes the component for the Kubeflow Pipelines system. A component definition has the following parts:
| Part | Description |
|---|---|
| Metadata | Name, description, etc. |
| Interface | Input/output specifications (type, default values, etc.) |
| Implementation | A specification of how to run the component given a set of argument values for the component’s inputs. The implementation section also describes how to get the output values from the component once the component has finished running. |
The Component specification is available at Kubeflow Component Spec.
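As a hedged illustration of the specification format, the sketch below embeds a small component spec as YAML text and loads it with the Kubeflow Pipelines SDK; the component name, image, and add-two-numbers logic are made up for the example and are not DKube-provided components.

```python
# Illustrative only: a tiny component specification (metadata, interface,
# implementation) loaded with the Kubeflow Pipelines SDK. The image and logic
# are placeholders.
from kfp import components

COMPONENT_TEXT = """
name: Add two numbers
description: Sums two integers and writes the result to an output file.
inputs:
- {name: a, type: Integer}
- {name: b, type: Integer}
outputs:
- {name: sum, type: Integer}
implementation:
  container:
    image: python:3.8          # placeholder runtime image
    command:
    - python
    - "-c"
    - |
      import pathlib, sys
      a, b, out_path = int(sys.argv[1]), int(sys.argv[2]), sys.argv[3]
      pathlib.Path(out_path).parent.mkdir(parents=True, exist_ok=True)
      pathlib.Path(out_path).write_text(str(a + b))
    - {inputValue: a}
    - {inputValue: b}
    - {outputPath: sum}
"""

# load_component_from_text returns a factory; calling add_op(a=..., b=...) inside
# a pipeline function creates a step that runs this component in its own container.
add_op = components.load_component_from_text(COMPONENT_TEXT)
```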
You must package your component as a Docker image. Components represent a specific program or entry point inside a container.
Each component in a pipeline executes independently. The components do not run in the same process and cannot directly share in-memory data. You must serialize (to strings or files) all the data pieces that you pass between the components so that the data can travel over the distributed network. You must then deserialize the data for use in the downstream component.
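The sketch below shows this with two lightweight Python-function components: the first returns a value, the pipeline system serializes it to a file, and the second receives the deserialized value in a separate container. The function names and values are hypothetical.

```python
# Sketch of data passing between pipeline steps. Each function becomes its own
# containerized component, so the value is serialized by the pipeline system
# rather than shared in memory. Function names and values are hypothetical.
from kfp import dsl
from kfp.components import func_to_container_op


def make_number() -> int:
    # The return value is serialized (written to a file) for downstream steps.
    return 42


def consume_number(value: int) -> None:
    # The serialized value is deserialized back to an int in this container.
    print("received", value)


make_number_op = func_to_container_op(make_number)
consume_number_op = func_to_container_op(consume_number)


@dsl.pipeline(name="serialization-example")
def serialization_example():
    produced = make_number_op()
    # produced.output references the serialized artifact; it is resolved at run time.
    consume_number_op(produced.output)
```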
The following screenshot shows an example of a pipeline graph, taken from one of the programs included as part of DKube.
The Python source code that corresponds to the graph is shown here.
In order to create an experiment, a Run must be initiated. This is an example of the Details needed for a Run.
After the Run is complete, the details of the Run and the outputs can be viewed. Information about the Run, including the full graph and the details of the Run, is available by selecting the Run name. The Pipeline stage provides more information from the Run details screen.
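For reference, the same Experiment-and-Run flow can also be driven from the Kubeflow Pipelines SDK rather than the UI. In the sketch below, the endpoint URL, experiment name, package path, and parameter values are placeholders.

```python
# Sketch of starting an Experiment and a Run with the Kubeflow Pipelines SDK.
# The host URL, experiment name, package path, and parameters are placeholders.
import kfp

client = kfp.Client(host="http://example-pipelines-endpoint")   # placeholder endpoint

experiment = client.create_experiment(name="mnist-demo")        # hypothetical experiment

run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name="mnist-demo-run-1",
    pipeline_package_path="train_and_serve.tar.gz",   # compiled pipeline package
    params={"epochs": 5, "learning_rate": 0.001},
)
print("Started run:", run.id)
```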