LLM Evaluation

LLM INFO AND PARAMETERS

1. Model Details:

a. Model Name: [Full Name of the LLM]

b. Creators: [Organization/Individual(s) Responsible]

c. Architecture and Type: [Generative Pre-trained Transformer (GPT), Masked Language Model (MLM), Encoder-only, Decoder-only, Encoder-Decoder, etc.]

d. Number of Experts: [If a Mixture of Experts (MoE) architecture is used, the number of experts, e.g., 8 (as in 8x7B)]

e. Modality: [Text, Audio, Video, Multimodal, etc.]

f. Model Size: [Number of Parameters, e.g., 70B, 300B]

g. Year Launched: [Year of Release]

h. Access: [Open-source, Closed-source, etc.]

i. License: [Specific License Type]

2. Hardware and Software:

a. Training Hardware: [Hardware Used for Training (Number and Type of GPUs, TPUs, etc.)]

b. Inference Hardware: [Hardware Recommended for Running the Model]

c. Software Frameworks: [Libraries or Frameworks Used (TensorFlow, PyTorch, etc.)]

3. Training Datasets:

a. Dataset Name(s): [Names of the Datasets Used for Training]

b. Type of Data: [Text, Code, Video, Audio, etc.]

c. Size of Dataset: [In Terabytes (TB), Petabytes (PB), etc.]

d. Number of Tokens: [Total Number of Tokens the Model Was Trained On]

4. Intended Use: [Primary Use Cases and Intended Users]

5. Number of Attention Heads: [e.g., 64]

6. Context Window Size: [Maximum Context Length in Tokens]

7. Training Cost per Token: [Estimated Cost per Training Token, e.g., in USD]

8. Total Training Cost: [No. of Tokens × Training Cost per Token; see the worked example after this list]

9. Inference Cost per Token: [Estimated Cost per Generated Token, e.g., in USD]

10. Ethical Considerations and Limitations:

a. Potential Biases: [Known Biases in the Model]

b. Limitations: [Known Limitations of the Model's Capabilities]

c. Safety Measures: [Measures Implemented to Mitigate Risks]

11. Fine-tuned Models: [List of Available Fine-tuned Models with Specific Use Cases]

12. Additional Information:

a. Reporting Issues: [Instructions on How to Report Issues with the Model]

b. Links to Relevant Resources: [Official Website, Documentation, Research Paper, etc.]
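
For items 7–9, here is a minimal worked example of the cost arithmetic in Python, using entirely hypothetical figures (the token counts and per-token prices below are placeholders, not numbers for any real model):

```python
# All figures are hypothetical placeholders, not data for any real model.
num_training_tokens = 2_000_000_000_000   # 2 trillion training tokens (assumed)
training_cost_per_token = 5e-8            # USD per training token (assumed)
inference_cost_per_token = 1e-9           # USD per generated token (assumed)

# Item 8: Total Training Cost = No. of Tokens × Training Cost per Token
total_training_cost = num_training_tokens * training_cost_per_token
print(f"Total training cost: ${total_training_cost:,.0f}")    # $100,000

# Item 9 supports the same kind of estimate at serving time.
tokens_served_per_day = 1_000_000_000     # 1 billion tokens/day (assumed)
daily_inference_cost = tokens_served_per_day * inference_cost_per_token
print(f"Daily inference cost: ${daily_inference_cost:,.2f}")  # $1.00
```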
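To capture a completed checklist in machine-readable form, one possible sketch is a Python dataclass mirroring a subset of the fields above; the field names and every example value are illustrative, not a fixed schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMEvaluationCard:
    """Illustrative machine-readable form of the checklist above."""
    # 1. Model Details
    model_name: str
    creators: str
    architecture: str                  # e.g., "Decoder-only Transformer"
    modality: list[str]                # e.g., ["text"] or ["text", "image"]
    parameters: str                    # e.g., "70B"
    year_launched: int
    access: str                        # "open-source" or "closed-source"
    license: str
    num_experts: Optional[int] = None  # None if not a Mixture of Experts
    # 3. Training Datasets
    dataset_names: list[str] = field(default_factory=list)
    num_training_tokens: Optional[int] = None
    # 5-7. Architecture and cost figures
    num_attention_heads: Optional[int] = None
    context_window_tokens: Optional[int] = None
    training_cost_per_token_usd: Optional[float] = None

# A hypothetical example entry; every value is made up for illustration.
card = LLMEvaluationCard(
    model_name="ExampleLM",
    creators="Example Org",
    architecture="Decoder-only Transformer",
    modality=["text"],
    parameters="70B",
    year_launched=2024,
    access="open-source",
    license="Apache-2.0",
    context_window_tokens=128_000,
)
print(card.model_name, card.parameters, card.context_window_tokens)
```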
