The process of writing high-quality code has become faster and more enjoyable.
Meta has released an update to its Code Llama code-generation model: Code Llama 70B, named for its 70 billion parameters.
As the company stated in its updated post, Code Llama 70B is "the largest and most efficient at the moment." The Code Llama family of tools first launched in August 2023 and is available for both research and commercial use.
Code Llama 70B is more accurate and can handle larger requests than previous versions. In practice, this means developers can give it more detailed prompts while programming and, in turn, get better answers.
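For illustration, here is a minimal sketch of how a developer might query the model through the Hugging Face Transformers library. The model ID, prompt, and generation settings are assumptions made for this example, not details from Meta's announcement.

# A minimal sketch (Python) of asking a Code Llama-style model for coding help.
# Assumptions: the weights are published on Hugging Face under an ID such as
# "codellama/CodeLlama-70b-Instruct-hf" and your hardware can fit the 70B model.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In a real setup the prompt would normally be wrapped in the instruct model's chat template, but the overall flow is the same.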
In the HumanEval benchmark, Code Llama 70B reached an accuracy of 53%, slightly ahead of OpenAI's GPT-3.5 (48.1%) but still short of GPT-4's 67%.
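To make that number concrete: each HumanEval task gives the model a Python function signature with a docstring, the model generates the body, and the solution only counts if it passes unit tests. The sketch below is a made-up task in that format, included purely for illustration; it is not an item from the benchmark itself.

# Illustration of a HumanEval-style task: the model sees the signature and
# docstring and must write the body; "accuracy" is the share of tasks whose
# generated solution passes the hidden unit tests.
def running_maximum(numbers):
    """Return a list where element i is the largest value seen in numbers[:i + 1]."""
    # A correct model-generated body might look like this:
    result, best = [], float("-inf")
    for x in numbers:
        best = max(best, x)
        result.append(best)
    return result

# The completion is then checked against tests such as:
assert running_maximum([1, 3, 2, 5, 4]) == [1, 3, 3, 5, 5]
assert running_maximum([]) == []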
"We are releasing a new improved version of Code Llama, which includes over 70 billion parameters," Mark Zuckerberg said in a post on Facebook, "I am proud of the progress in this direction and look forward to including these improvements in Llama 3 and future models."
Code Llama 70B remains free for research and commercial purposes. The language model was trained on 1 TB of code and code-related data.
Meta said its larger 34B and 70B language models "return the best results and provide the highest quality programming assistance."