Exploring Pygmalion: The New Contender in Language Models
Enthusiasm is building in the OpenAI community around Pygmalion, a cleverly named new language model. Initial responses vary, but the community is clearly eager to explore its capabilities and quirks.
Pygmalion exhibits some distinctive behaviour, particularly in role-playing scenarios. Users report that it generates frequent emotive responses, much like its predecessor, Pygmalion 7B from TavernAI. However, some users find it less coherent than its cousin, Wizard Vicuna 13B uncensored, because it tends to respond to the general context rather than the specific prompt.
Interestingly, unlike older Pygmalion models, the new iterations come without the previous explicit content warning. It remains to be seen whether this indicates a shift in model behaviour or a broader acceptance of potential NSFW outputs from such models.
The community is also keenly awaiting the release of a 4B variant of this model, with users eager to share updates as soon as they're available.
Despite some unresolved questions, such as the complexities of merging with the LLaMA weights, anticipation remains high. Community members are working through these issues and sharing merged versions of Pygmalion for public use. Keep an eye on these links for updates: Pygmalion-13b-Merged, Metharme-13b-Merged.
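For readers wondering why merging is needed at all: weights for models like these are often published not as full checkpoints but as XOR deltas against the base LLaMA weights, which the end user must decode locally. The sketch below is a simplified illustration of that byte-level XOR idea only; the function names and toy data are hypothetical and not the project's actual merge tooling.

```python
def apply_xor_delta(base: bytes, delta: bytes) -> bytes:
    """XOR each byte of the base weights with the delta.

    XOR is its own inverse, so one operation covers both directions:
    the publisher computes delta = target ^ base, and the user
    reconstructs target = base ^ delta.
    """
    if len(base) != len(delta):
        raise ValueError("base and delta must be the same length")
    return bytes(b ^ d for b, d in zip(base, delta))


# Toy round trip: pretend these are raw bytes from a checkpoint shard.
base = bytes([10, 20, 30, 40])    # hypothetical base LLaMA bytes
target = bytes([11, 22, 33, 44])  # hypothetical fine-tuned bytes
delta = apply_xor_delta(target, base)    # what a publisher would ship
merged = apply_xor_delta(base, delta)    # what the end user reconstructs
assert merged == target
```

In practice the real merge scripts operate shard by shard on multi-gigabyte checkpoint files, which is part of why the process trips people up, but the underlying arithmetic is this simple.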
This buzz truly exemplifies the resilience and collective ingenuity of the OpenAI community, highlighting its capacity to thrive and innovate in the face of challenges.
Tags: OpenAI, Pygmalion