Natural Language Inference

Bases: TextAPI

Represents a Natural Language Inference (NLI) API built on Hugging Face transformer models. The class handles a range of NLI tasks, including entailment, classification, similarity checking, and fact checking, and uses CherryPy to expose API endpoints that can be called via standard HTTP requests.

Attributes:

    model (AutoModelForSequenceClassification): The loaded Hugging Face model for sequence classification tasks.
    tokenizer (AutoTokenizer): The tokenizer corresponding to the model, used for processing input text.

CLI Usage Example: To interact with the NLI API, start the server with a command like the example below. Once the server is running, you can use curl to call the individual endpoints.

Example:

genius NLIAPI rise \
    batch \
        --input_folder ./input \
    batch \
        --output_folder ./output \
    none \
    --id "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7" \
    listen \
        --args \
            model_name="MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7" \
            model_class="AutoModelForSequenceClassification" \
            tokenizer_class="AutoTokenizer" \
            use_cuda=True \
            precision="float" \
            quantization=0 \
            device_map="cuda:0" \
            max_memory=None \
            torchscript=False \
            endpoint="*" \
            port=3000 \
            cors_domain="http://localhost:3000" \
            username="user" \
            password="password"
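The curl requests shown below can also be prepared from Python using only the standard library. This is a minimal sketch, assuming the server configured above is listening on localhost:3000 with the basic-auth credentials from the CLI example; the classify payload is taken from the example further down.

```python
import base64
import json
import urllib.request

# Build (but do not yet send) a POST to the classify endpoint, assuming
# the server from the CLI example is listening with basic auth enabled.
payload = {
    "text": "The new movie is a thrilling adventure in space",
    "candidate_labels": ["entertainment", "politics", "business"],
}
token = base64.b64encode(b"user:password").decode()
request = urllib.request.Request(
    "http://localhost:3000/api/v1/classify",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    },
    method="POST",
)
# Once the server is up: response = urllib.request.urlopen(request)
```

Sending the prepared request with `urllib.request.urlopen(request)` returns the same JSON body the curl examples print.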

__init__(input, output, state, **kwargs)

Initializes the NLIAPI with configurations for handling input, output, and state management.

Parameters:

    input (BatchInput): Configuration for the input data. Required.
    output (BatchOutput): Configuration for the output data. Required.
    state (State): State management for the API. Required.
    **kwargs (Any): Additional keyword arguments for extended functionality. Default: {}.

classify(**kwargs)

Endpoint for classifying the input text into one of the provided candidate labels using zero-shot classification.

Parameters:

    **kwargs (Any): Arbitrary keyword arguments, typically containing 'text' and 'candidate_labels'. Default: {}.

Returns:

    Dict[str, Any]: A dictionary containing the input text, candidate labels, and classification scores.

Example CURL Request:

curl -X POST localhost:3000/api/v1/classify \
    -H "Content-Type: application/json" \
    -d '{
        "text": "The new movie is a thrilling adventure in space",
        "candidate_labels": ["entertainment", "politics", "business"]
    }'

detect_intent(**kwargs)

Detects the intent of the input text from a list of possible intents.

Parameters:

    text (str): The input text. Required.
    intents (List[str]): A list of possible intents. Required.

Returns:

    Dict[str, Any]: A dictionary containing the input text and detected intent with its score.

Example CURL Request:

curl -X POST localhost:3000/api/v1/detect_intent \
    -H "Content-Type: application/json" \
    -d '{
        "text": "Theres something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience Ive in fact reached the opposite conclusion). Fast forward about a year: Im training RNNs all the time and Ive witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me.",
        "intents": ["teach","sell","note","advertise","promote"]
    }' | jq

entailment(**kwargs)

Endpoint for evaluating the entailment relationship between a premise and a hypothesis. It returns the relationship scores across possible labels like entailment, contradiction, and neutral.

Parameters:

    **kwargs (Any): Arbitrary keyword arguments, typically containing 'premise' and 'hypothesis'. Default: {}.

Returns:

    Dict[str, Any]: A dictionary containing the premise, hypothesis, and their relationship scores.

Example CURL Request:

curl -X POST localhost:3000/api/v1/entailment \
    -H "Content-Type: application/json" \
    -d '{
        "premise": "This a very good entry level smartphone, battery last 2-3 days after fully charged when connected to the internet. No memory lag issue when playing simple hidden object games. Performance is beyond my expectation, i bought it with a good bargain, couldnt ask for more!",
        "hypothesis": "the phone has an awesome battery life"
    }' | jq
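Under the hood, an NLI head produces one logit per relation, and the scores in the response are a softmax over those logits. A minimal sketch of that post-processing, with hypothetical logits and an assumed label order of [entailment, neutral, contradiction] (the actual order is defined by the checkpoint's config):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw model logits.
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one (premise, hypothesis) pair; in the real
# endpoint these come from AutoModelForSequenceClassification.
labels = ["entailment", "neutral", "contradiction"]
logits = [3.1, 0.2, -2.4]
scores = dict(zip(labels, softmax(logits)))
```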

fact_checking(**kwargs)

Performs fact checking on a statement given a context.

Parameters:

    context (str): The context or background information. Required.
    statement (str): The statement to fact check. Required.

Returns:

    Dict[str, Any]: A dictionary containing fact checking scores.

Example CURL Request:

curl -X POST localhost:3000/api/v1/fact_checking \
    -H "Content-Type: application/json" \
    -d '{
        "context": "Theres something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience Ive in fact reached the opposite conclusion). Fast forward about a year: Im training RNNs all the time and Ive witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me.",
        "statement": "The author is looking for a home loan"
    }' | jq

initialize_pipeline()

Lazy initialization of the NLI Hugging Face pipeline.
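The pattern itself is simple: defer constructing the expensive pipeline object until the first request actually needs it. A library-free sketch of the idea (the real method builds a Hugging Face pipeline rather than calling a plain factory, and the names below are illustrative):

```python
class LazyPipeline:
    """Defers an expensive construction until first use."""

    def __init__(self, factory):
        self._factory = factory      # e.g. a transformers pipeline builder
        self._pipeline = None

    def __call__(self, *args, **kwargs):
        if self._pipeline is None:   # only the first call pays the load cost
            self._pipeline = self._factory()
        return self._pipeline(*args, **kwargs)

# Demonstrate that the factory runs exactly once across repeated calls.
calls = []
def fake_factory():
    calls.append("built")
    return lambda text: {"input": text}

nli = LazyPipeline(fake_factory)
first = nli("a premise")
second = nli("another premise")
```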

question_answering(**kwargs)

Performs question answering for multiple choice questions.

Parameters:

    question (str): The question text. Required.
    choices (List[str]): A list of possible answers. Required.

Returns:

    Dict[str, Any]: A dictionary containing the scores for each answer choice.

Example CURL Request:

curl -X POST localhost:3000/api/v1/question_answering \
    -H "Content-Type: application/json" \
    -d '{
        "question": "[ML-1T-2] is the dimensional formula of",
        "choices": ["force", "coefficient of friction", "modulus of elasticity", "energy"]
    }' | jq

textual_similarity(**kwargs)

Evaluates the textual similarity between two texts.

Parameters:

    text1 (str): The first text. Required.
    text2 (str): The second text. Required.

Returns:

    Dict[str, Any]: A dictionary containing the similarity score.

Example CURL Request:

curl -X POST localhost:3000/api/v1/textual_similarity \
    -H "Content-Type: application/json" \
    -d '{
        "text1": "Theres something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience Ive in fact reached the opposite conclusion). Fast forward about a year: Im training RNNs all the time and Ive witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me.",
        "text2": "There is something magical about training neural networks. Their simplicity coupled with their power is astonishing."
    }' | jq
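One plausible way to derive a symmetric similarity score from a directional NLI model is to average the entailment probability in both directions. The sketch below substitutes a toy word-overlap scorer for the real model call, purely to illustrate the shape of the computation; it is not the endpoint's actual scoring function.

```python
def entail_prob(premise, hypothesis):
    # Toy stand-in for the NLI model: Jaccard overlap of word sets.
    a = set(premise.lower().split())
    b = set(hypothesis.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def textual_similarity(text1, text2):
    # Average the two directions so the score is symmetric even when
    # the underlying entailment scorer is not.
    return 0.5 * (entail_prob(text1, text2) + entail_prob(text2, text1))
```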

zero_shot_classification(**kwargs)

Performs zero-shot classification using the Hugging Face pipeline, classifying text against candidate labels the model was never explicitly trained on.

Parameters:

    **kwargs (Any): Arbitrary keyword arguments, typically containing 'premise' and 'hypothesis'. Default: {}.

Returns:

    Dict[str, Any]: A dictionary containing the premise, hypothesis, and their classification scores.

Example CURL Request for zero-shot classification:

curl -X POST localhost:3000/api/v1/zero_shot_classification \
    -H "Content-Type: application/json" \
    -d '{
        "premise": "A new study shows that the Mediterranean diet is good for heart health.",
        "hypothesis": "The study is related to diet and health."
    }' | jq
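Zero-shot classification is NLI in disguise: each candidate label is turned into a hypothesis via a template and ranked by its entailment score against the input text. A self-contained sketch of that mechanism, with a toy scorer standing in for the model and a template string that is an assumption (real pipelines default to something similar):

```python
def entailment_score(premise, hypothesis):
    # Toy stand-in for the NLI model: fraction of hypothesis words
    # that also appear in the premise.
    p = set(premise.lower().split())
    h = hypothesis.lower().split()
    return sum(w in p for w in h) / len(h) if h else 0.0

def zero_shot(text, candidate_labels, template="this example is about {}"):
    raw = {lbl: entailment_score(text, template.format(lbl))
           for lbl in candidate_labels}
    total = sum(raw.values()) or 1.0   # normalise so the scores sum to 1
    return {lbl: s / total for lbl, s in raw.items()}

result = zero_shot(
    "a new study shows the mediterranean diet is good for heart health",
    ["diet", "politics"],
)
```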