Llama.cpp C++-to-C# wrapper from testedlines.com: C++ docs 1.0.1
The Llama.cpp C++-to-C# wrapper is a minor extension of the Llama.cpp tag b3490 codebase, modified slightly by testedlines.com so that it can be compiled for, and called from, the Styled Lines C# Unity Asset Store package.
LlamaInfrence Class Reference

Provides an interface for inference operations on a model such as GPT (though any model compatible with LLama.cpp b3212 would do). More...

#include <lib.h>


Public Member Functions

void Echo (std::string in)
 
bool Generate (const std::string &prompt)
 Generates output based on the provided prompt.
 
std::string GetGenerated ()
 Retrieves the complete generated text after a generation call. The string is emptied when a new generation call begins.
 
 LlamaInfrence (const LoggingContext *logging, const gpt_params &cfg_params)
 Sets up the model with the specified LLama.cpp configuration parameters, which may include the testedlines extensions: file (to load the model from a string buffer) and the use_preloaded_file flag.
 
void Stop ()
 Stops the current generation operation. Intended to be called from callbacks to control the length of the generated text.
 
 ~LlamaInfrence ()
 Destructor; performs cleanup and allows for proper cleanup in derived classes.
 

Static Public Member Functions

static gpt_params * GetParameters (const std::string &llamacpp_cmd_args)
 Generates a model configuration from a string of command-line-style arguments, the same format as in the original LLama.cpp documentation. Provided mainly for quick configuration testing.
 

Data Fields

LlamaImpl * pImpl
 Pointer to the implementation class, used by the PImpl idiom.
 

Detailed Description

Provides an interface for inference operations on a model such as GPT (though any model compatible with LLama.cpp b3212 would do).

The LlamaInfrence class provides methods for setting up the model with configuration parameters, generating predictions, and handling lifecycle events such as logging and completion notifications. The class uses the PImpl (Pointer to Implementation) idiom to hold variables that are not exposed to external libraries.
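
A minimal usage sketch follows. It assumes a null LoggingContext is tolerated by the constructor and that the model path passed via -m exists; both are assumptions, not documented guarantees.

    #include <lib.h>
    #include <iostream>
    #include <string>

    int main() {
        // Build a config from command-line-style arguments (hypothetical model path).
        gpt_params *params = LlamaInfrence::GetParameters("-m models/model.gguf -n 128");
        if (!params)
            return 1;

        // Assumption: a null LoggingContext is acceptable; pass a real one for log callbacks.
        LlamaInfrence llama(nullptr, *params);

        if (llama.Generate("Hello"))
            std::cout << llama.GetGenerated() << std::endl;
        return 0;
    }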

Constructor & Destructor Documentation

◆ LlamaInfrence()

LlamaInfrence::LlamaInfrence ( const LoggingContext * logging,
const gpt_params & cfg_params )

Sets up the model with the specified LLama.cpp configuration parameters, which may include the testedlines extensions: file (to load the model from a string buffer) and the use_preloaded_file flag.

Parameters
cfg_params  A reference to a gpt_params struct containing the setup parameters for the model.
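
A sketch of constructing from an in-memory model via the testedlines extensions. The spelling of the extension fields on gpt_params is taken from the summary above and may differ, and the buffer-loading helper is hypothetical:

    gpt_params *params = LlamaInfrence::GetParameters("-n 64");   // base config
    std::string bytes = read_gguf_bytes();    // hypothetical helper: raw model data
    params->file = bytes;                     // extension: model loaded from a string buffer
    params->use_preloaded_file = true;        // extension flag: skip loading from disk
    LlamaInfrence llama(nullptr, *params);    // assumption: null LoggingContext tolerated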

◆ ~LlamaInfrence()

LlamaInfrence::~LlamaInfrence ( )

Destructor; performs cleanup and allows for proper cleanup in derived classes.

Member Function Documentation

◆ Echo()

void LlamaInfrence::Echo ( std::string in)

◆ Generate()

bool LlamaInfrence::Generate ( const std::string & prompt)

Generates output based on the provided prompt.

Parameters
prompt  The input prompt to the model.
Returns
True if the generation was successful, false otherwise.
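
For example (a sketch; nothing beyond the boolean result is documented for failure reporting):

    if (!llama.Generate("Write a haiku about spring."))
    {
        // Generation failed; this interface does not expose a reason,
        // so any details would arrive through the logging callbacks.
    }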

◆ GetGenerated()

std::string LlamaInfrence::GetGenerated ( )

Retrieves the complete generated text after a generation call. The string is emptied when a new generation call begins.

Returns
A string containing the generated text.
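
A sketch of the fetch-after-generate pattern implied by the description above:

    if (llama.Generate("List three primary colors."))
    {
        std::string text = llama.GetGenerated();   // full text of this generation
        // ... use text ...
    }
    llama.Generate("Another prompt");   // this call empties the previous string first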

◆ GetParameters()

static gpt_params * LlamaInfrence::GetParameters ( const std::string & llamacpp_cmd_args)
static

Generates a model configuration from a string of command-line-style arguments, the same format as in the original LLama.cpp documentation. Provided mainly for quick configuration testing.

Parameters
llamacpp_cmd_args  A string of command-line-style arguments to configure the model.
Returns
A pointer to the generated gpt_params struct if configuration generation was successful.
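
For example (the model path is hypothetical, and treating a null result as failure is an assumption; the failure convention is not documented here):

    gpt_params *params = LlamaInfrence::GetParameters(
        "-m models/7B/model.gguf --temp 0.7 -n 256");   // same flags as the llama.cpp CLI
    if (params == nullptr)
    {
        // Assumed failure convention: re-check the argument string
        // against the LLama.cpp documentation.
    }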

◆ Stop()

void LlamaInfrence::Stop ( )

Stops the current generation operation. Intended to be called from callbacks to control the length of the generated text.
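
A sketch of length control from a callback. The callback wiring itself is hypothetical, since this file does not document how callbacks are registered:

    // Hypothetical per-token callback: end generation once enough text exists.
    void OnTokenGenerated(LlamaInfrence *llama)
    {
        if (llama->GetGenerated().size() > 2048)
            llama->Stop();   // halts the current Generate() early
    }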

Field Documentation

◆ pImpl

LlamaImpl* LlamaInfrence::pImpl

Pointer to the implementation class, used by the PImpl idiom.
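
A generic sketch of the idiom (illustrative only; the real LlamaImpl members are intentionally hidden):

    // lib.h — the public header exposes only an opaque pointer:
    class LlamaImpl;            // forward declaration; the definition stays in the .cpp
    class LlamaInfrence
    {
        // ... public interface as documented above ...
        LlamaImpl *pImpl;       // all private state lives behind this pointer
    };
    // Because LlamaImpl is defined only in the implementation file,
    // consumers of lib.h never need the underlying llama.cpp headers.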


The documentation for this class was generated from the following file:
lib.h