# SDK
The Purple Fabric SDK empowers users, developers, and subject-matter experts to build, integrate, and extend agentic AI solutions within the Purple Fabric ecosystem. It brings the platform's powerful building blocks directly into your applications.
This page covers: Overview, Design Checklist, SDK Lifecycle, Connector Build Journey Using SDK, and Connector Deployment Journey.
## Overview
The Connector module enables seamless integration between third-party tools and the Platform. This integration pattern facilitates data exchange from tools like Google Drive (GDrive), Amazon S3, Notion, and more. The Platform provides a suite of pre-built (Out-of-the-Box or OOTB) connectors, as well as the flexibility for clients to build custom connectors. Developers can leverage the Software Development Kit (SDK) provided by the Platform to create these connectors. The design documentation will guide developers through the SDK lifecycle, connector creation journey, and deployment options.
We currently support two Connector journeys, outlined in the sections below: the Connector Build Journey Using SDK and the Connector Deployment Journey.
## Design Checklist
## SDK Lifecycle
The SDK lifecycle defines the process of developing, testing, building, and publishing new versions of the SDK. The SDK provides developers with the necessary tools and libraries to create custom connectors, ensuring consistency and ease of use.
Note:
The connector SDK currently supports only Python, with plans to expand support to other languages in the future.
Below is the typical Python SDK development flow:
1. SDK - Build: Developers write and organize the SDK code, which is stored in version control (i.e., iGit repositories). The SDK code is designed to be flexible, so developers can use it to create connectors for various third-party tools.
2. SDK - Test: Once the code is built, it undergoes rigorous testing to ensure its functionality and integration capabilities. This phase involves verifying that the SDK interacts as expected with external services and handles data correctly.
3. SDK - Publish: After passing testing, the SDK is packaged and published. The new SDK version is made available for integration with the platform and for client use. A PyPI account is created for the Platform (PF/IntellectAI) to distribute the SDK.
4. Jenkins Integration: Jenkins plays a key role in automating the SDK publishing process. Once the SDK passes the build and test phases, Jenkins automates publishing of the SDK package to PyPI, where it can be downloaded and integrated into new connectors.
5. Connector Deployment: Once the SDK is available, developers can use it to build custom connectors. After development, these connectors are deployed through various pipeline options, discussed in the next section.
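For orientation, the publish step that Jenkins automates (steps 3 and 4 above) corresponds to the standard Python packaging flow. The commands below are a sketch of the manual equivalent, assuming a conventional package setup; the actual Jenkins pipeline stages are internal to the Platform:

```
python -m build
twine upload dist/*
```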
## Connector Build Journey Using SDK
The Platform provides an SDK with a Command Line Interface (CLI) to facilitate the development of custom connectors. Using the SDK, developers can create, build, test, and publish connectors that integrate third-party services (e.g., Google Drive, Amazon S3, Notion) with the Platform.
This section outlines the step-by-step process for creating a connector using the SDK and the CLI.
Note:
- OOTB Connectors are built, published, and maintained by the PF team
- Developers must have user access to the Platform
- The CLI supports additional commands such as help, nfr-report, and deploy
Step 1: Consult the Documentation & Download the SDK
Developers begin by referring to the Connector Documentation, which provides guidelines for building connectors. The SDK is available on PyPI, and developers can install it using:

```
pip install platform-sdk
```
Step 2: Create a New Connector Project
To create a new connector, developers use the CLI to generate a project structure.
```
pf-cli create [OPTIONS]
```
Actions Performed:
- A new connector project is initialized
- The required folder structure and configuration files are generated
- The developer can then define the schema for data ingestion and processing (an illustrative scaffold layout follows this list)
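The generated layout varies by SDK version; a hypothetical scaffold might look like the following. Only config.json and schema.json are referenced elsewhere in this document; the remaining names are illustrative:

```
my-connector/
├── config.json        # connector configuration
├── schema.json        # data model, API endpoints, authentication requirements
└── src/
    └── connector.py   # connector code logic
```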
Step 3: Define the Schema & Implement Code Logic
Once the project is created, developers:
1. Prepare the Schema: Define the data model, API endpoints, and authentication requirements
2. Generate the Code Logic: Implement the connector logic, handling API requests and data transformations
3. Test Locally: Before deployment, the connector should be tested locally (see the sketch following this list)
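To make Step 3 concrete, below is a minimal, illustrative sketch of connector code logic in Python. The BaseConnector class, method names, and endpoint are all assumptions; the real platform-sdk interface may differ:

```python
import requests


class BaseConnector:
    """Hypothetical SDK base class; the real platform-sdk interface may differ."""

    def run(self):
        # Fetch raw records, then map them onto the declared schema.
        yield from self.transform(self.fetch())


class DriveConnector(BaseConnector):
    """Illustrative connector that ingests documents from a third-party API."""

    def __init__(self, api_token: str):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_token}"

    def fetch(self):
        # Handle the API request to the external service (placeholder URL).
        response = self.session.get("https://api.example.com/v1/files")
        response.raise_for_status()
        return response.json()["items"]

    def transform(self, raw_items):
        # Transform raw fields into the data model defined in schema.json.
        for item in raw_items:
            yield {
                "id": item["id"],
                "name": item.get("name", ""),
                "content": item.get("content", ""),
            }
```

Local testing (step 3 above) can then be as simple as instantiating the class and iterating over run().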
Step 4: Build the Connector
Once development is complete, the next step is to build the connector:
```
pf-cli build [OPTIONS]
```
Actions Performed:
- The CLI validates the connector code and dependencies
- A build artifact is generated
Step 5: Publish the Connector
To deploy the connector to the Platform, developers use:
```
pf-cli deploy [OPTIONS]
```
Actions Performed:
The connector is uploaded to Amazon S3, where its configuration (config.json) is stored. The following is the structure of the storage repository:

```
Bucket Name/
└── Asset Name/
    └── Version/
        ├── code.zip
        ├── config.json
        └── nfr_report.json
```

A message is then sent to Amazon SQS, triggering the connector registration process.
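The registration message contract is internal to the Platform; purely as an illustration, assuming boto3 and hypothetical queue and payload fields, the trigger might look like:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL and payload fields, shown only to illustrate the flow.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/connector-registration",
    MessageBody=json.dumps({
        "asset_name": "my-connector",
        "version": "1.0.0",
        "s3_path": "Bucket Name/my-connector/1.0.0/",
    }),
)
```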
Step 6: Deployment & Integration
After publishing, the connector is available for use within the Platform. Developers can configure and monitor it through the UI.
## Connector Deployment Journey
Note: The default SDK option currently utilizes Docker and generates a Docker image as part of the deployment pipeline. A proof of concept (POC) is needed to evaluate the deployment strategy (Docker vs Lambda vs Runtime). The results of this POC will guide the redefinition of the available deployment options.
The deployment of a connector follows a structured pipeline that ensures security, compliance, and reliability. The process involves Jenkins, Amazon S3, Docker Hub, Kubernetes (K8s) Helm Charts, and Amazon SQS for messaging.
This section details how a connector is published and deployed, including Non-Functional Requirement (NFR) validation and failure handling mechanisms.
Step 1: Initiating Deployment
- A message is sent to Amazon SQS to trigger the deployment process
- Jenkins picks up the job and starts the deployment workflow
Step 2: Code Download & NFR Validation
Jenkins downloads the connector code from the platform's internal storage (currently Amazon S3):

```
Bucket Name/
└── Asset Name/
    └── Version/
        ├── code.zip
        ├── config.json
        └── nfr_report.json
```

It then runs the NFR validation process, which currently includes a Twistlock scan to check for security vulnerabilities.
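In Python terms, the download step amounts to retrieving the versioned artifacts from S3; a minimal boto3 sketch with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder names; the pipeline derives these from the SQS deployment message.
bucket = "platform-connector-store"
prefix = "my-connector/1.0.0/"

for name in ("code.zip", "config.json"):
    s3.download_file(bucket, prefix + name, name)
```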
Step 3: NFR Validation Check
- The deployment pipeline executes Twistlock scans
- If NFR validation fails, the deployment is halted, and:
  - The NFR report (JSON) is stored in the S3 bucket
  - A failure message is sent to Amazon SQS, which triggers Builder SVC to update the connector’s status to Failed
- If NFR validation passes, the report is stored in the S3 bucket and the deployment proceeds
- The user can run the nfr-report CLI command to retrieve the NFR metrics for the corresponding connector (a sketch of the pass/fail gating logic follows this list)
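The NFR report's schema is not specified in this document; assuming it exposes a simple per-check verdict, the gating logic might resemble:

```python
import json

# Assumed report shape: {"checks": [{"name": "...", "passed": true}, ...]}
with open("nfr_report.json") as f:
    report = json.load(f)

failed = [check["name"] for check in report.get("checks", []) if not check.get("passed")]

if failed:
    # Halt the deployment; the report is persisted to S3 and a failure
    # message is sent to SQS so Builder SVC can mark the connector Failed.
    raise SystemExit(f"NFR validation failed: {', '.join(failed)}")

print("NFR validation passed; proceeding with deployment.")
```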
Step 4: Building & Publishing the Connector
- Jenkins builds and publishes a Docker image of the connector to Docker Hub
- The Kubernetes Helm Charts are prepared for deployment
Step 5: Deploying the Connector
- The connector’s Pod is deployed in Kubernetes
- The system waits for the Pod to reach the Ready status (an illustrative readiness check follows)
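As an illustration of that readiness check, using the official Kubernetes Python client (the Pod name and namespace are placeholders, and the platform's actual mechanism may differ):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

# Placeholder Pod name and namespace.
pod = v1.read_namespaced_pod(name="my-connector-pod", namespace="connectors")

ready = any(
    cond.type == "Ready" and cond.status == "True"
    for cond in (pod.status.conditions or [])
)
print("Pod ready" if ready else "Pod not ready yet")
```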
Step 6: Deployment Status Update
- If deployment fails, a failure message is sent to Amazon SQS, and the Builder SVC marks the status as Failed
- If deployment succeeds, the system:
  - Sends a success message to Amazon SQS (sketched below)
  - Triggers the Builder SVC to update the Asset status to "Published" and activate the connector
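A hedged sketch of how the pipeline might report the outcome to Builder SVC over SQS (the queue URL and payload fields are assumptions):

```python
import json

import boto3


def report_status(asset_name: str, version: str, succeeded: bool) -> None:
    """Send the deployment outcome to the (hypothetical) status queue."""
    boto3.client("sqs").send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/deployment-status",
        MessageBody=json.dumps({
            "asset_name": asset_name,
            "version": version,
            "status": "Published" if succeeded else "Failed",
        }),
    )
```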
Tools & Technologies Used
- Jenkins: automates build, NFR validation, and deployment
- Amazon S3: stores connector code, configuration, and NFR reports
- Twistlock: scans for security vulnerabilities during NFR validation
- Docker / Docker Hub: packages and hosts connector images
- Kubernetes (K8s) Helm Charts: deploy connector Pods
- Amazon SQS: carries trigger and status messages between the pipeline and Builder SVC
Failure Handling & Logging
- NFR Failure: The deployment stops, and failure details are logged in S3 and sent to SQS
- Deployment Failure: The connector status is updated to Failed in Builder SVC
Final Outcome
Once successfully deployed, the connector is available for use within the Platform. If any failures occur, logs and reports are available for debugging.
Refer to the following pages to learn more:
- What Purple Fabric’s SDK can do for you?
- Powers of the CLI
- SDK - Getting Started
- The schema.json file
- Examples of Production-ready Connectors
- Frequently Asked Questions (FAQs)