Contextual Extraction


An enterprise platform powered by Gen AI with automated extraction capabilities streamlines document handling for users. Utilizing Artificial Intelligence, the platform automatically identifies and extracts relevant data from various sources, reducing manual efforts and minimizing errors. This enables business users to perform easy and efficient extraction, saving time, improving data accuracy, and boosting overall productivity.

Users must have the Gen AI User policy to access the extraction capability. 

This guide will walk you through the steps on how to create an Extraction Agent.

  1. Create an asset
  2. Select a prompt template
  3. Select a model and set model configurations
  4. Provide the system instructions, parameters, output schema and examples
  5. Run the model and view results
  6. Publish the asset

Step 1: Create an asset

  1. Head to the Asset Studio module and click Create new and choose Generative AI.



  2. In the Generative AI window that appears, enter a unique Asset name, for example, “NACH Mandate Extractor” to easily identify it within the platform.



  3. Optional: Enter a brief description and upload an image to provide additional context or information about your Asset.
  4. In Type, choose the Automation Agent and click Create.
  5. In Asset Visibility, choose any one of the following options.
    • Private (default): Choose this option to ensure that only you, the owner, can view and manage the asset.  
    • All Users: Choose this option to share the asset with everyone in the workspace who has the appropriate permissions to view and manage the asset.
  6. Click Create to start the Extractor Asset creation.

Step 2: Select a prompt template

  1. On the Generative AI Asset creation page that appears, choose Default Prompt template.



Step 3: Select a model and set model configurations

Select a Model

  1. Select a model from the available list, considering model size, capability, and performance. For more information about the models, see Model Capability Matrix.


Set Model Configuration

  1. Click the configuration option and then set the tuning parameters to optimize the model’s performance. For more information, see Advance Configuration.

Step 4: Provide the system instructions, parameters, output schema and examples

Provide System Instructions 

A system instruction refers to a command or directive provided to the model to modify its behavior or output in a specific way. For example, a system instruction might instruct the model to summarize a given text, answer a question in a specific format, or generate content with a particular tone or style.

  1. Enter the system instructions by crafting a prompt that guides the agent in extracting the data. 
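A hypothetical system instruction for the “NACH Mandate Extractor” example from Step 1 might look like the following; the wording and fields are illustrative only and should be adapted to your own documents:

    You are a document extraction agent. From the NACH mandate provided as input,
    extract the account holder name, bank account number, mandate start date, and
    maximum debit amount. Return only the requested fields. If a field is not
    present in the document, return an empty value for it.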



Add Parameters 

  1. In the Parameter section, click Add.



  2. Enter the following information.
    • Name: Enter the Name of the input parameter.
    • Type: Choose File as the data type.
    • Description: Enter the Description for each of the input parameters. The description of the parameters ensures accurate interpretation and execution of tasks by the Gen AI Asset. Be as specific as possible.

  3. Click the settings option against the input parameter to add the input field settings.
  4. Choose the required file formats (PDF, JPEG, JPG, TIFF, PNG) from the drop-down menu.


  5. Select a chunking strategy for file inputs. The chunking strategy can be applied by Page, Words, or Block.



  6. Choose any one of the following for Context Handling.
    • Combined Context: Choose this option to process multiple files together as one, creating a unified context. This method provides a comprehensive view by considering the collective content.
    • Individual Contexts: Choose this option to treat each file separately. This approach maintains the unique context of each file, allowing for detailed and isolated analysis.

  7. Click Save to proceed.

Define Output Schema

  1. In the Output section, click Add to define the output schema for the Asset.
  2. Enter the Variable Name, Type and Description for each of the output variables. Supported types include Text, Number, Boolean, and DateTime.

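For example, a hypothetical output schema for the NACH Mandate Extractor from Step 1 might look like this; the variable names and descriptions are illustrative only:

    Variable Name          Type       Description
    account_holder_name    Text       Name of the account holder as printed on the mandate
    account_number         Text       Bank account number to be debited
    mandate_start_date     DateTime   Date from which the mandate is effective
    maximum_amount         Number     Maximum amount that can be debited per transaction
    is_signed              Boolean    Whether the mandate carries a signature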

Provide Examples

Examples help the extraction task at hand by enhancing the agent’s understanding and response accuracy. These examples help the agent learn and improve over time.

  1. In the Examples section, click Add. 



  2. Provide the Context and Answer in the example section.
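A hypothetical Context and Answer pair for the same example might look like this; the values are illustrative only:

    Context: “I, R. Sharma, authorise debits from account 1234567890 up to a maximum
    of INR 25,000 per month, effective 01-04-2024. (Signed)”
    Answer: account_holder_name = “R. Sharma”; account_number = “1234567890”;
    mandate_start_date = “2024-04-01”; maximum_amount = 25000; is_signed = true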



Step 5: Run the model and view results

  1. In the Debug and preview section, browse and add the required document.



  2. Upload the files based on the following format:
    • File Upload Specification (Single File): The maximum size for a single file upload is 19 MB.
    • File Upload Specification (Multiple Files):
      • File Upload Limit: Users can upload up to 10 files at a time.
      • File Size Limit: The maximum size for each uploaded file is 11 MB. Users can upload up to a total combined size of 110 MB across 10 files.
    • Supported File Formats: The system validates each file based on pre-defined accepted formats. Supported formats include: pdf, tiff, png, jpeg, jpg, doc, docx.

  3. Click Run to get the results for extraction in the required format. 



  4. Review the generated output. Verify the extraction by checking the respective information for the output.



  5. You can also view the JSON Output (a sample is shown after this list).



  6. Click Reference to view additional information or context about the extraction results, such as the source data, detailed explanations, and relevant metadata. Select the respective Reference to view its information.
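Assuming the hypothetical schema defined earlier, the JSON Output might look similar to the following; the exact structure returned by the platform may differ:

    {
      "account_holder_name": "R. Sharma",
      "account_number": "1234567890",
      "mandate_start_date": "2024-04-01",
      "maximum_amount": 25000,
      "is_signed": true
    }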



Note: If you are not satisfied with the results, try modifying the System Instructions and the descriptions of the output variables. You can also try changing to a different model.

Step 6: Publish the asset

  1. Click Publish if the desired accuracy and performance for extracting the content has been achieved.



     
  2. In the Asset Details page that appears, write a description and upload an image for a visual representation.



  3. Click Publish. The status of the Asset changes to Published, and it can then be accessed in the Asset Studio.


Note: Once the Asset is published, you can download the API and its documentation. The API can be consumed independently or used within a specific Use case. If you wish to consume this Asset via API, see Consume an Asset via API.

You can also consume this automation Asset in the Asset Monitor module. For more information, see Consume an Asset via Create Transaction.

Contextual Classification


The Purple Fabric Platform powered by Gen AI with automated classification capabilities streamlines the process of organizing and categorizing vast amounts of data for users. The Platform utilizes artificial intelligence to automatically classify data into predefined categories or groups based on its content, context, and attributes. By eliminating the need for manual sorting and classification, Purple Fabric platform enables business users to perform easy and efficient data classification. This enhances data management, improves data accuracy, and accelerates decision-making processes. Additionally, the platform’s intelligent algorithms can adapt to new data patterns and categories, ensuring flexibility and scalability to meet evolving business requirements.

Users must have the Gen AI User policy to access the classification capability. 

This guide will walk you through the steps on how to create a Classification Asset.

  1. Create an asset
  2. Select a prompt template
  3. Select a model and set model configurations
  4. Provide the system instruction, parameters, output schema and examples
  5. Run the model and view results
  6. Publish the asset

Step 1: Create an asset

  1. Head to the Asset Studio module and click Create Asset then choose Generative AI.




  2. In the Create Gen AI asset window that appears, enter a unique Asset name, for example, “Document_Classifier” to easily identify it within the platform.



  3. Optional: Enter a brief description and upload an image to provide additional context or information about your Asset.
  4. In Type, choose the Automation Agent and click Create.

Step 2: Select a prompt template

  1. On the Generative AI Asset creation page that appears, choose Default Prompt template.



Step 3: Select a model and set model configurations

Select a Model

  1. Select a model from the available list, considering model size, capability, and performance. For more information about the models, see Model Capability Matrix.


Set Model Configuration

  1. Click the configuration option and then set the tuning parameters to optimize the model’s performance. For more information, see Advance Configuration.

Step 4: Provide the system instructions, parameters, output schema and examples

Provide System Instructions 

A system instruction refers to a command or directive provided to the model to modify its behavior or output in a specific way. For example, a system instruction might instruct the model to summarize a given text, answer a question in a specific format, or generate content with a particular tone or style.

  1. Enter the system instructions by crafting a prompt that guides the agent in classifying the data.  



Add Parameters 

  1. In the Parameter section, click Add.



  2. Enter the following information.



    • Name: Enter the Name of the input parameter.
    • Type:  Choose File as the data type.
    • Description: Enter the Description for each of the input parameters. The description of the parameters ensures accurate interpretation and execution of tasks by the Gen AI Asset. Be as specific as possible.
  3. Click the settings option against the parameter to add the input field settings.
  4. Choose the required file formats (PDF, JPEG, JPG) from the drop-down menu.



  5. Select a chunking strategy for file inputs. The chunking strategy can be applied by Page, Words, or Block.



  6. Click Save to proceed.

Define Output Schema

  1. In the Output section, click Add to define the output schema for the Asset.



  2. Enter the following information.



  3. In Variable Name, provide the names of the classes you wish to classify the data into.
  4. In Type, select any one of the following types.
    • Text 
    • Number
    • Boolean
  5. In Description, enter the description for the parameter.

Note: The description of the parameters ensures accurate interpretation and execution of tasks by the Generative AI asset.
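For example, a hypothetical “Document_Classifier” could define one Boolean output variable per class, so that the agent marks the matching class as true:

    Variable Name    Type      Description
    invoice          Boolean   True if the document is an invoice
    purchase_order   Boolean   True if the document is a purchase order
    receipt          Boolean   True if the document is a receipt
    contract         Boolean   True if the document is a contract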

Provide Examples

Examples help the classification task at hand by enhancing the agent’s understanding and response accuracy. These examples help the agent learn and improve over time.

  1. In the Examples section, click Add.



  2. Provide the Context and the Answer in the example section.



Step 5: Run the model and view results 

  1. In the Debug and preview section, browse and add the required document.



     
  2. Upload the files based on the following format:
    • File Upload Limit: Users can upload up to 10 files at a time.
    • File Size Limit: The maximum size for each uploaded file is 11 MB. Users can upload up to a total combined size of 110 MB across 10 files.
    • Supported File Formats: The system validates each file based on pre-defined accepted formats. Supported formats include: pdf, tiff, png, jpeg, jpg, doc, docx.
  3. Click Run to get the results for classification in the required format. 



  4. Review the generated output. Verify the classification by checking if the class is marked as true (indicating the data is classified as that class). If marked as false, the data is not classified as that class.



  5. You can also view the JSON output.



  6. Click Reference to view additional information or context about the classification results, such as the source data, detailed explanations, and relevant metadata. Select the respective Reference to view its information.



Note: If you are not satisfied with the results, try modifying the System Instructions and the descriptions of the output variables. You can also try changing to a different model.

Step 6: Publish the asset

  1. Click Publish if the desired accuracy and performance for classifying the data has been achieved. 



     
  2. Optional: In the Asset Details page that appears, write a description and upload an image for a visual representation.



  3. Click Publish. The status of the Asset changes to Published, and it can then be accessed in the Asset Studio.


Note: Once the Asset is published, you can download the API and its documentation. The API can be consumed independently or used within a specific Use case. If you wish to consume this Asset via API, see Consume an Asset via API.

You can also consume this automation Asset in the Asset Monitor module. For more information, see Consume an Asset via Create Transaction.

Create Tools


Tools refer to the various components, including both custom-built solutions and third-party APIs, that can be customized and integrated to perform specific tasks or functions within a Gen AI Asset/Agent. These tools are essential for enhancing the capabilities of the Gen AI platform and enabling it to address diverse business needs effectively.

Users must have a Gen AI User policy to create and manage the tools. 

The Platform allows you to create the following types of tools:

  • Custom tool
  • API tool

Create custom tool

This is a component or module specifically developed by the enterprise to fulfill unique requirements or address specific challenges. By customizing tools, businesses can fine-tune their AI solutions to align closely with their objectives and tailor them to their operational needs.

These tools enhance the efficiency and effectiveness of AI-powered workflows, enabling enterprises to automate repetitive tasks, improve decision-making processes, and enhance customer experiences.

For example, a custom tool could be developed to automatically find the account holder with the highest savings amount.

  1. Head to the Asset Studio module and choose Tools.



  2. In the Tools section, click Create Tool.



  3. In the Create Tool asset window that appears, enter a unique Name and Description.



  4. In Type, choose the Custom option and then click Create.
  5. On the Custom tool page that appears, write the Python code for the tool that you wish to create, and then click Test.



  6. If you are satisfied with the result, click Submit to create the custom tool. 

Note: You can test with static inputs and validate the response. In case you want to pass dynamic input while integrating the custom tool with an agent, define the inputs as dynamic variables. You may receive an error if you use dynamic variables in the code. This does not necessarily mean the code is not functioning properly, as the custom tool will be integrated with the LLM models and will fetch the inputs from them.
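As a minimal sketch of the kind of custom tool described above, the following Python code finds the account holder with the highest savings amount. The accounts input is a hypothetical dynamic variable; the exact way the platform injects dynamic inputs and expects outputs may differ.

    # Hypothetical custom tool: find the account holder with the highest savings amount.
    # "accounts" is assumed to be a dynamic input supplied by the agent at run time,
    # e.g. a list of {"name": ..., "savings": ...} dictionaries.
    def find_top_saver(accounts):
        if not accounts:
            return {"name": None, "savings": 0}
        top = max(accounts, key=lambda account: account.get("savings", 0))
        return {"name": top.get("name"), "savings": top.get("savings")}

    # Static test input for validating the tool before integrating it with an agent.
    sample_accounts = [
        {"name": "A. Kumar", "savings": 125000},
        {"name": "B. Iyer", "savings": 98000},
        {"name": "C. Rao", "savings": 157500},
    ]
    print(find_top_saver(sample_accounts))  # {'name': 'C. Rao', 'savings': 157500}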

Create API tool

APIs play a crucial role in integrating third-party services or functionalities into the Gen AI platform. Leveraging APIs allows enterprises to access state-of-the-art AI capabilities without needing to build everything from scratch, accelerating development timelines and reducing costs.

For example, you could leverage an API that allows the user to retrieve and update information from internal databases or customer relationship management (CRM) systems.  

  1. Head to the Asset Studio module and choose Tools.



  2. In the Tools section, click Create Tool.



  3. In the Create Tool asset window that appears, enter a unique Name and Description.



  4. Click Create.  
  5. In the Tool page that appears, choose any one of the following methods and enter the URL.
    • GET: Choose this method if you wish to obtain information from a database or from a process.
    • POST: Choose this method if you wish to send information, either to add information to a database or to pass the input of an Asset.
    • PUT: Choose this method if you wish to manage or update the information in the database.
    • DELETE: Choose this method if you wish to delete the information from the database.

  6. Click Test to check the response.
  7. You can also enter the Headers, Body and Parameters information from the respective tabs.

Add Headers

  1. In the Headers tab, enter the Key, Value, and Description information.



  2. You can also use the Constant/Variable field against the Key values.
    • Constant field: Activate this option to prevent users from changing the Key values while accessing the API.
    • Variable field: Activate this option to allow users to change the Key values while accessing the API.
  3. Click the add option if you wish to add more fields.
  4. Select the respective check boxes against the Key, Value, and Description information that you wish to process for this API call.

Note: If you do not select the checkboxes, the keys will not be processed during the API call. 

Add Body 

  1. In the Body tab, use any one of the following options.
    • None: Choose this option if you wish to process the API without body information.
    • JSON: Choose this option if you wish to process the API with the JSON code. 
    • Form-data: Choose this option if you wish to process the API with the Key, Type, Value and Description information.



      • You can use the following options against the Type.
        • Text: Choose this option if you wish to update the Value information in text format.
          • You can use the Constant/Variable field against the Type.
            • Activate the Constant field to prevent users from changing the Key values while accessing the API.
            • Activate the Variable field to allow users to change the Key values while accessing the API.
        • File: Choose this option if you wish to update the Value information as a File.

      • Click the add option if you wish to add more fields.
      • Select the respective check boxes against the Key, Type, Value, and Description information that you wish to process for this API call.

Note: If you do not select the checkboxes, the keys will not be processed during the API call. 

Add Parameter

  1. In the Parameter tab, enter the Key, Value information.



  2. You can also use the Constant/Variable field against the Key values.
    • Activate the Constant field to prevent users from changing the Key values while accessing the API.
    • Activate the Variable field to allow users to change the Key values while accessing the API.
    • Click the add option if you wish to add more fields.
    • Finally, select the respective check boxes against the Key, Value, and Description information.
  3. Select the checkbox against the keys that you wish to process for this API call.

Note: If you do not select the checkboxes, the keys will not be processed during the API call.

  4. Click Submit to create the API tool.
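The following Python sketch is an illustration only of how the Key/Value entries configured in the Headers, Parameters, and Body tabs translate into an HTTP request; the URL, keys, and token below are hypothetical placeholders, not platform values.

    import requests

    url = "https://example.com/api/customers"        # URL entered for the tool
    headers = {"Authorization": "Bearer <token>"}    # Headers tab: Key/Value pairs
    params = {"region": "APAC"}                      # Parameter tab: Key/Value pairs
    body = {"status": "active"}                      # Body tab: JSON option

    # The method (GET, POST, PUT, DELETE) matches the one chosen for the tool.
    response = requests.post(url, headers=headers, params=params, json=body)
    print(response.status_code, response.json())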

Create a Knowledge Base


A Knowledge Base refers to an advanced, centralized, enterprise-specific repository of information that is not only structured for human comprehension but is also optimized for machine understanding. It leverages artificial intelligence to draw inferences and provide more contextually relevant and personalized responses to user queries.

Enterprise-specific knowledge bases play a pivotal role in Gen AI solutions for enterprises by offering tailored insights and solutions that are highly relevant to the organization’s unique context and requirements.

For example, consider a business user tasked with analyzing portfolio performance for a group of clients. Instead of manually sifting through numerous documents or databases, the business user interacts with the firm’s AI-powered Agent. They input a query such as, “Retrieve recent portfolio returns for clients in the XYZ investment group.” Integrated with the Knowledge Base, the system comprehends the user’s request and swiftly retrieves the relevant portfolio performance data for clients within the specified investment group. 

Users must have the Gen AI User policy to create a Knowledge Base.

This guide will walk you through the steps on how to create a Knowledge Base.

  1. Upload documents
  2. Initiate knowledge base creation
  3. Import documents 
  4. Configure a chunking strategy
  5. Choose an embedding model
  6. Metadata tagging for chunks
  7. Experiment/test the knowledge base
  8. Publish the knowledge base

Step 1: Upload documents 

It is recommended to upload the required documents in the Document Library so that they are available during Knowledge Base creation. For more information on how to upload documents, see Upload documents.

Note: Skip this step if you have already uploaded the required documents in the Document Library.

Step 2: Initiate knowledge base creation

  1. Head to Asset Studio and choose Knowledge hub.



  2. In the Knowledge hub section, click Create Knowledge.



  3. In the Create Knowledge base window, enter a unique Knowledge Base Name and Description.



  4. Click Create to initiate the creation of the Knowledge Base.

Step 3: Import documents

  1. On the Knowledge Base creation page, select Import documents.



  2. In the Documents window that appears, select the required documents.



Filter

  1. Click the filter option, choose the appropriate filters to view the documents you are searching for, and then click Apply.



    • You can view the applicable results based on the chosen filters.

  2. Select the required documents and click X (close) to import the documents.


Step 4: Configure a chunking strategy

Document chunking refers to breaking down large documents or data sets into smaller, more manageable chunks for processing. Document chunking is a technique that improves the performance and cost efficiency of Gen AI platforms by allowing parallel processing, resource optimization, scalability, fault tolerance, and cost-effective operation.

The chunking strategy helps the LLM with better retrieval, faster processing, and better understanding (a conceptual sketch of word-based chunking appears at the end of this step).

  1. On the import message that appears, click Chunk Now.



  • Alternatively: In the menu tab, click Chunk viewer and then select Configuration.
  2. In the Configuration window that appears, choose any one of the following chunk strategies.



    • Block: Choose this option if you wish to chunk the documents by blocks. Suitable for documents with diverse sections or topics where each block may represent a distinct segment requiring individual processing.

    • Page: Choose this option if you wish to chunk the documents by pages. Appropriate for documents with consistent and uniform content, where dividing by page ensures even distribution and manageable sections.

    • Word: Choose this option if you wish to chunk the documents by words. Beneficial for content where word-level context is paramount.

      • Set word limit: Ensure that the model processes a specific number of words for accuracy and coherence.



Note: You can also choose an embedding model in this step. For more information, see Choose an embedding model.

  3. Click Update changes.
  4. You can now get the chunks for each document that you have selected.
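As a conceptual illustration of word-based chunking with a word limit (the platform’s own chunking implementation may differ), a minimal Python sketch could look like this:

    # Conceptual illustration of word-based chunking with a word limit.
    def chunk_by_words(text, word_limit=200):
        words = text.split()
        return [
            " ".join(words[i:i + word_limit])
            for i in range(0, len(words), word_limit)
        ]

    document_text = "..."  # text extracted from an imported document
    chunks = chunk_by_words(document_text, word_limit=200)
    print(len(chunks), "chunks created")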



Step 5: Choose an embedding model

A vector embedding model is like a special tool that helps a system understand the meaning of words and sentences in a simpler way. It works by turning words and sentences into numbers, which the computer can easily work with.

The embedding model converts the chunks into numerical vector representations of the actual content that LLMs can understand and use to generate output (a conceptual retrieval sketch appears at the end of this step).

  1. After chunking, click RAG Viewer and then click Configuration.



  • Alternatively: In the menu tab, click Chunk viewer and then select Configuration.
  2. In the configuration window that appears, enable the Embedding option.



  3. Choose any one of the following embedding models to convert the chunks into numerical vector representations.
    • Azure OpenAI Text Ada 002
    • BGE Large
    • Azure OpenAI Text Embedding 3 Small
  4. Click Update changes to initiate embedding.
  5. You can identify the embedded chunks with an Embedded label.
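As a conceptual sketch (not the platform’s implementation), embedded chunks can be compared to an embedded query using cosine similarity to surface the most relevant chunks. The embed() function below is a stand-in for whichever embedding model you selected above.

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def embed(text):
        # Placeholder: a real call would go to the selected embedding model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.random(16)

    chunk_vectors = {chunk: embed(chunk) for chunk in ["chunk one ...", "chunk two ..."]}
    query_vector = embed("portfolio returns for the XYZ investment group")
    ranked = sorted(
        chunk_vectors.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    print(ranked[0][0])  # most similar chunk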

Step 6: Metadata and tagging for chunks

Add metadata

Metadata is a systematic way to communicate information about content. It is significant because it facilitates the discovery, usage, and preservation of that content by establishing a consistent mechanism and terminology.

Metadata for documents could include information such as title, author, publication date, journal or conference name, abstract, keywords, citations, and more.

  1. In the Knowledge base creation page, select the required chunks to which you wish to add metadata.



  2. Select the Add metadata option that appears against the selected chunk(s).



  3. In the Metadata window that appears, enter the Name and Value that you wish to add as metadata for the selected chunks.



     
  4. Optional: Use the add option to include more metadata entries.

  5. Click Submit and view the added metadata details at the bottom of the chunk.



Add Tag

Tagging in document retrieval involves assigning descriptive keywords or labels to documents based on their content or user-defined categories.

Tagging helps to effectively organize and retrieve information, speeding up search procedures and improving the efficiency and performance of the model.

  1. In the Knowledge base creation page, select the required chunks to which you wish to add a tag.



  2. Select the Add tag option that appears against the selected chunk(s).



  3. In the tags window that appears, enter the tag name, press Enter, and then click Submit.



  4. Click to view the added tags at the bottom of the chunks.



Step 7: Experiment/test the knowledge base

  1. In the Knowledge base creation page, click RAG Viewer.



  2. In the Search bar, enter your query and press Enter. 



  3. Based on the query you entered, you can view the respective chunks of information.



  4. Click the filter option and choose the metadata and tags associated with the chunks for better filtering.



  5. Click Apply to get the filtered chunks. 
  6. You can add metadata/tags against the chunks for better retrieval and faster processing when using this Knowledge Base in other Use cases.

Step 8: Publish the knowledge base

  1. If the desired Knowledge Base has been created, click Publish.



  2. The Published Knowledge Base can be accessed in the Knowledge Hub. 



Fine-tune an Extractor Asset


Fine-tuning is the process of adjusting and optimising a trained Asset before it is published. This involves further training an Asset using an additional document set to boost its accuracy and confidence score.

To evaluate whether fine-tuning is necessary for an Asset, you can view the Accuracy Results page after Asset training is completed, which provides an overview of both correctly and incorrectly predicted entities. By doing this, you can identify patterns or areas where fine-tuning can potentially improve the Asset’s performance.

Note: Fine-tuning is only applicable for trained Assets before they are published, not to published Assets. If you wish to improve the performance of published Assets, you can proceed to retrain the Assets. For more information on retraining an Asset, see Retrain an Extractor Asset.

Users must have any one of the following policies to fine-tune an Extractor Asset:

  • Administrator Policy
  • Creator Policy

This guide will walk you through the steps on how to fine-tune an Extractor Asset.

  1. Consider scenarios for fine-tuning 
  2. Upload documents
  3. Initiate fine-tune
  4. Select documents
  5. Annotate and train
  6. Review results and validate
  7. Publish the asset

Step 1: Consider scenarios for fine-tuning 

The decision to fine-tune an Asset depends on your objectives, which are often field- and document-specific.

  1. Head to the Asset Studio page and select the trained Asset that you wish to fine-tune.
  2. In the Accuracy Result page that appears, check the Asset’s overall accuracy rate, entity-level accuracy and confidence score.



Things to know


Document type: In the context of Extractor Asset, a document type refers to the category or class that a document belongs to. For example, in an Extractor Asset, the document types can be “Invoice,” “Purchase Order,” “Receipt,” “Contract,” and more. Each of these represents a distinct category of documents. 

Entities: Entities refer to the fields and tables information.

Document Variation: Document variation refers to the different variations or instances within a specific document type. For example, various invoices could have different layouts, formats, or styles depending on factors like the vendor, company, or industry standards. 

Overall Accuracy: The overall accuracy represents the percentage of correct predictions made by the Asset across all Entities.

Entity level Accuracy: Entity level accuracy represents the percentage of correct predictions made by the Asset for individual entities in the test document set.

Confidence score: The confidence score is a measure of how confident the Asset is in its predictions for Entity information extracted from the documents.

  3. You can consider fine-tuning the Asset in the following scenarios:
    • To improve the overall accuracy of the Asset: Consider fine-tuning the Asset when the overall accuracy of the Asset is low. 
    • To improve the entity level Accuracy: Consider fine-tuning the Asset when the accuracy of certain entities is low.
    • To improve the accuracy for specific document variations: Consider fine-tuning the Asset for specific document variations with low accuracy. For example, if you’re creating an Extractor Asset to extract entities from invoices, and you notice low accuracy or confidence scores for invoices from specific vendors or invoices in certain formats, then you can initiate fine-tuning.
    • To improve the confidence score: Consider fine-tuning the Asset when the confidence score for certain entities or document variations is low. 

Step 2: Upload documents

After identifying areas for improvement in the Asset, it is recommended to have the required document sets ready for fine-tuning the Asset. If you have already uploaded the documents in the Document Library, skip this step and proceed to fine-tune.

Otherwise, upload the required documents in the Document Library. For more information about uploading documents, see Upload documents.


Step 3: Initiate fine-tune 

Note: It is important to be mindful that fine-tuning may also reduce the accuracy of the Asset when it is not properly performed with the appropriate document set and annotations.

  1. On the Accuracy Result page, click Fine-tune.



  2. In the Proceed to fine-tune window that appears, click Proceed.



Step 4: Select documents

  1. In the Document Sets pane, select or search for the document set.



  2. In the right pane, select the required documents to fine-tune the Extractor Asset.



Note: Select a minimum of 10 documents to proceed with fine-tuning. However, we recommend a volume of 25 documents or more to provide a higher accuracy measure.

  3. Click Proceed to annotate the documents.

Step 5: Annotate and train 

Data annotation is the process of labelling data to show the outcome you want your machine learning model to predict.

For more information on how to annotate fields, tables and sections, see Annotate field, Annotate a table and Annotate Section and Group.

Step 6 : Review results and validate

This step allows you to access the Asset’s predictions, accuracy, and confidence score. 

Additionally, you can utilise the Validate feature to evaluate the Extractor Asset’s performance on a new set of documents. 

For more information on reviewing the results and validation, see Review results and validate.

Step 7: Publish the asset

If the desired accuracy has been achieved, you can proceed to Publish the Asset. For more information on how to publish the Asset, see Publish the asset.

Note: Once the fine-tuned Asset is published, you can download the API and its documentation. The API can be invoked independently or used within a specific Use case. If you wish to consume this Asset via API, see Consume an Asset via API.

It is recommended to use URL Aliases, if you wish to consume multiple versions of an Asset. It allows you to consume its different versions via a single API. For more information, see URL aliases.

You can also consume this asset in the Asset Monitor module. For more information, see Consume an Asset via Create Transaction.

URL aliases


The URL alias is a feature to create custom aliases for URLs associated with the APIs. This provides a more user-friendly experience when accessing the service URL.

The URL aliases allow users to simplify and enhance the readability of the URLs used in the Asset APIs. For example, instead of using complex and lengthy URLs, users can create custom aliases that are more closely aligned with their Asset name. It replaces the “{asset_version_id}” segment with a user-friendly “alias/{alias_name}” segment.

Normal URL : {{baseUrl}}magicplatform/v1/invokeasset/{asset_version_id}/usecase

Alias URL : {{baseUrl}}magicplatform/v1/invokeasset/alias/{alias_name}

Additionally, URL aliases allow users to consume different versions of an Asset via a single API. Users have the option to create URL aliases, which helps them access multiple versions of an Asset without having to deploy their respective APIs. This reduces the need for multiple deployments to access different Asset versions.
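As a minimal sketch of invoking an Asset through its alias URL, based on the format shown above, a call could look like the following Python snippet; the base URL, authentication header, alias name, and payload shape are assumptions, so refer to the downloaded API documentation for the exact details.

    import requests

    base_url = "https://<your-environment>/"                  # assumption
    alias_name = "nach-mandate-extractor"                     # hypothetical alias
    url = f"{base_url}magicplatform/v1/invokeasset/alias/{alias_name}"

    headers = {"Authorization": "Bearer <token>"}             # assumption
    payload = {"input": "..."}                                # assumption

    response = requests.post(url, headers=headers, json=payload)
    print(response.status_code)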

This section in the Administration module allows you to create and manage the URL aliases on the platform. The following are the operations you can perform in the URL aliases section; you must have the Administrator policy to perform them:

  1. Create URL aliases 
  2. Download API
  3. Edit URL aliases
  4. Delete URL aliases

Create URL aliases

  1. Head to the Administration module and then select URL aliases.



  2. In the URL aliases tab, click Create new.



  3. On the new URL alias page that appears, enter a unique Alias name.



Note: The Alias name cannot be modified once the URL alias is created.

  4. In the Asset Mapping section, select the desired Asset and its Version to consume.
  5. Click Submit to create the URL alias.

Note: Once the URL alias is created, you can download the API and its documentation. For more information on how to download the API, see Download API. 

Download API

  1. Head to the Administration module and then select URL aliases.



  2. In the URL aliases tab, select the required Alias.



  3. On the URL aliases information page that appears, select Download API to download its API.



Note: You can consume the upgraded versions of an Asset using the same API. To consume different versions of an asset, you need to map the respective version in Asset Mapping.

The Downloaded API can be consumed independently. If you wish to consume this Asset via API, see Consume an Asset via API. 

Edit URL aliases

This section provides instructions on how to edit URL aliases on the platform. You can modify only the Asset Mapping information in this section. 

  1. Head to the Administration module and select URL aliases.



  2. In the URL aliases tab, select the Alias for which you wish to modify the information.



  3. On the URL alias information page that appears, click Edit.



  4. Make the desired Asset mapping with the respective Assets and Versions, and then click Submit.



Note: Modifying Asset Mapping information will not affect the existing API information.

Delete URL aliases

This section provides instructions on how to delete existing URL aliases from the platform.

  1. Head to the Administration module and select URL aliases.



  2. In the URL aliases tab, select the URL alias you wish to delete.



  3. On the URL aliases information page that appears, select Delete.



Note: Deleting a URL alias is not possible while the Asset is being consumed.

