60 changes: 60 additions & 0 deletions github-star-organizer/README.md
@@ -0,0 +1,60 @@
# GitHub Star Organizer (Agent Skill)

An AI-powered agent skill designed to automatically categorize and organize your GitHub stars into native **GitHub Star Lists**.

## 🌟 Key Features

- **AI-Powered Classification**: Automatically identifies the purpose of repositories (e.g., AI, Frontend, DevOps) and groups them logically.
- **Native Integration**: Uses the official GitHub GraphQL API to create and manage Star Lists directly on your account.
- **Scalable**: Handles both small collections and large sets of stars (hundreds or thousands) with optimized workflows.
- **Permission Auto-fix**: Includes built-in logic to detect and help refresh missing GitHub CLI scopes.

## 📋 Prerequisites

1. **GitHub CLI (`gh`)**: Must be installed and authenticated.
2. **Required Scopes**: Your GitHub token must have the `user` scope (required for Star Lists).
- Check status: `gh auth status`
- Refresh scope: `gh auth refresh -s user`
3. **Python 3**: Required to run the background processing scripts.

## 🚀 Workflow

When you activate this skill in Gemini CLI or a compatible agent, it follows these steps:

### 1. Verification
The agent checks if you are logged in via `gh` and ensures you have the correct permissions.
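
The agent runs `gh auth status` and confirms the `user` scope is present before continuing. A quick manual check (the `grep` filter is just one way to surface the scope line):

```bash
gh auth status                                # confirm you are logged in
gh auth status 2>&1 | grep -i "token scopes"  # the 'user' scope should be listed here
gh auth refresh -s user                       # add the scope if it is missing
```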

### 2. Fetching Stars
The agent runs a script to retrieve all your starred repositories:
```bash
python3 scripts/get_all_stars.py
```
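
The script writes the starred repositories as JSON to stdout; one way to capture it for the next step (the `stars.json` file name is just an example, not mandated by the skill):

```bash
python3 scripts/get_all_stars.py > stars.json
```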

### 3. Categorization
- **Small Sets**: For fewer than 50 stars, the agent can propose categories directly in the chat.
- **Large Sets (50 or more)**: To avoid context limits, the agent uses a programmatic approach to map stars to categories in `stars_with_category.json` (entry format shown below).
- **Customization**: You can specify preferred languages for category names or request specific topics.
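
Each entry in `stars_with_category.json` has the shape expected by the sync script (see `SKILL.md` for the full specification):

```json
[
  {
    "id": "MDEwOlJlcG9zaXRvcnkxNDEzNDky",
    "nameWithOwner": "defunkt/jquery-pjax",
    "description": "pushState + ajax = pjax",
    "categorizeName": "Web Frontend"
  }
]
```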

### 4. Syncing to GitHub
Once you approve the categorization, the agent executes the sync script:
```bash
python3 scripts/organizer.py
```
This script creates the necessary Star Lists on GitHub and adds the repositories to them.

## 📂 File Structure

- `SKILL.md`: The core definition file that instructs the AI on how to handle the organization process.
- `scripts/get_all_stars.py`: Fetches starred repo data using GitHub GraphQL.
- `scripts/organizer.py`: Creates Star Lists and performs the final synchronization.
- `scripts/util.py`: Shared utilities including GraphQL wrappers and permission handlers.
- `stars_with_category.json`: The intermediate data file that stores classifications before syncing.

## 💡 Tips

- **Review Before Sync**: You can manually edit `stars_with_category.json` if you want to fine-tune the AI's suggestions before they are pushed to GitHub.
- **Clean Lists**: The skill defaults to requiring at least four projects per category so your Star Lists stay tidy.
- **Incremental Updates**: The sync script detects existing lists and avoids duplicates.

---
*Developed for use with Gemini CLI and compatible AI Agents.*
79 changes: 79 additions & 0 deletions github-star-organizer/SKILL.md
@@ -0,0 +1,79 @@
---
name: github-star-organizer
description: Automatically categorize and organize GitHub stars into native GitHub Star Lists using AI-powered classification. Make sure to use this skill whenever the user mentions organizing stars, cleaning up GitHub, managing starred repositories, or grouping stars by topic, even if they don't explicitly ask for "Lists".
author: luoage
version: 1.1.0
homepage: https://github.com/luoage/github-star-organizer
license: MIT
requires:
- gh (GitHub CLI)
- python3
- GitHub token with 'user' scope
trigger_keywords:
- organize stars
- categorize my GitHub stars
- manage starred repos
- clean up GitHub stars
- group stars by topic
---

## Purpose
Automatically classify your GitHub stars into meaningful categories (e.g., "AI", "DevOps", "Frontend") using AI inference, then create and populate **GitHub Star Lists**—a native GitHub feature for organizing stars.

> ✅ Ideal for managing hundreds or thousands of unorganized stars efficiently.

## Workflow

1. **Verify Authentication**
- Check status: `gh auth status`
- Login if needed: `gh auth login`
- Refresh scope if `user` permission is missing: `gh auth refresh -s user`

2. **Set User Preferences**
- Ask for the preferred language for category names (e.g., English or Chinese).
- Determine if the user has specific categories in mind or wants the AI to suggest them.

3. **Fetch All Starred Repositories**
Execute the retrieval script:
```bash
python3 github-star-organizer/scripts/get_all_stars.py
```
- This retrieves all starred repositories via the GitHub CLI.

4. **Categorize Stars (AI-Powered)**
- **For small lists (fewer than 50 stars):** The AI can generate the `stars_with_category.json` directly.
- **For large lists (50 or more stars):** To avoid truncation and ensure all stars are processed, **always use a Python script** to perform the categorization. Use keywords or a local LLM call to map each star to a `categorizeName`.
- Ensure each category contains at least 4 projects to avoid cluttered lists.
- The final output must be saved to `stars_with_category.json` in this format:
```json
[
{
"id": "MDEwOlJlcG9zaXRvcnkxNDEzNDky",
"nameWithOwner": "defunkt/jquery-pjax",
"description": "pushState + ajax = pjax",
"categorizeName": "Web Frontend"
}
]
```
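
A minimal keyword-based sketch of the programmatic path (the keyword map, the `stars.json` input file name, and the `Uncategorized` fallback are illustrative assumptions, not part of the skill):

```python
import json

# Illustrative keyword -> category map; adapt to the user's preferred language and topics.
KEYWORD_CATEGORIES = {
    "react": "Web Frontend",
    "vue": "Web Frontend",
    "docker": "DevOps",
    "kubernetes": "DevOps",
    "llm": "AI",
    "transformer": "AI",
}

def categorize(stars):
    """Attach a categorizeName to every star based on simple keyword matching."""
    for repo in stars:
        text = f"{repo['nameWithOwner']} {repo.get('description') or ''}".lower()
        repo["categorizeName"] = next(
            (cat for kw, cat in KEYWORD_CATEGORIES.items() if kw in text),
            "Uncategorized",  # review and merge small buckets before syncing
        )
    return stars

if __name__ == "__main__":
    # Assumes the output of get_all_stars.py was redirected to stars.json.
    with open("stars.json", encoding="utf-8") as f:
        stars = json.load(f)
    with open("stars_with_category.json", "w", encoding="utf-8") as f:
        json.dump(categorize(stars), f, ensure_ascii=False, indent=2)
```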

5. **Review and Refine**
- Present the proposed categories and sample mappings to the user.
- Allow the user to adjust category names or group assignments until satisfied.

6. **Sync to GitHub Star Lists**
Execute the sync script:
```bash
python3 scripts/organizer.py
```
- Reads `stars_with_category.json`.
- Creates new GitHub Star Lists as needed.
- Adds repositories to their respective lists via the GitHub API.

## Files Included
1. `scripts/get_all_stars.py` – Fetches all starred repos using `gh api graphql`.
2. `scripts/organizer.py` – Creates Star Lists and syncs repositories via the GraphQL API.
3. `scripts/util.py` – Shared `gh api graphql` wrapper with permission auto-fix.
4. `stars_with_category.json` – The intermediate classification file.

## Troubleshooting
- **Missing Stars?** If only a fraction of your stars were processed, in-chat categorization likely hit output limits. Re-run the categorization with the programmatic script described in step 4.
- **Permission Errors?** Ensure your GitHub token has the `user` scope. Run `gh auth refresh -s user` to fix.
35 changes: 35 additions & 0 deletions github-star-organizer/scripts/get_all_stars.py
@@ -0,0 +1,35 @@
import json
from util import run_gh_api


def get_all_stars():
"""Get all GitHub Stars"""
stars = []
has_next = True
cursor = None
query = """
query($cursor: String) {
viewer {
starredRepositories(first: 100, after: $cursor) {
pageInfo { hasNextPage endCursor }
nodes { id nameWithOwner description }
}
}
}
"""
while has_next:
vars = {"cursor": cursor} if cursor else {}
data = run_gh_api(query, vars)
if not data or 'data' not in data:
break
repo_data = data['data']['viewer']['starredRepositories']
stars.extend(repo_data['nodes'])
has_next = repo_data['pageInfo']['hasNextPage']
cursor = repo_data['pageInfo']['endCursor']

return stars


if __name__ == "__main__":
stars = get_all_stars()
print(json.dumps(stars))
88 changes: 88 additions & 0 deletions github-star-organizer/scripts/organizer.py
@@ -0,0 +1,88 @@
import os
import json
import time
import sys
from util import run_gh_api

# Get the directory where the script is located
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
# stars_with_category.json is in the parent directory of scripts/
STARS_FILE = os.path.join(os.path.dirname(SCRIPT_DIR), 'stars_with_category.json')

def get_existing_lists():
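    """Return a mapping of existing GitHub Star List names to their node IDs."""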
query = """
query {
viewer {
lists(first: 100) {
nodes { id name }
}
}
}
"""
data = run_gh_api(query)
if data and 'data' in data and 'viewer' in data['data']:
return {item['name']: item['id'] for item in data['data']['viewer']['lists']['nodes']}
return {}

def create_list(name):
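    """Create a GitHub Star List with the given name and return its node ID (None on failure)."""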
query = """
mutation($name: String!) {
createUserList(input: { name: $name }) {
list { id name }
}
}
"""
res = run_gh_api(query, {"name": name})
if res and 'data' in res and res['data'].get('createUserList'):
return res['data']['createUserList']['list']['id']
return None

def add_to_list(list_id, repo_id):
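    """Put a repository into the given Star List via the updateUserListsForItem mutation.

    A single list ID is passed; GraphQL input coercion wraps it into the
    expected [ID!] list for the listIds variable.
    """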
query = """
mutation($repoId: ID!, $listIds: [ID!]!) {
updateUserListsForItem(input: { itemId: $repoId, listIds: $listIds }) { __typename }
}
"""
return run_gh_api(query, {"repoId": repo_id, "listIds": list_id})

def main():
if not os.path.exists(STARS_FILE):
print(f"❌ Error: {STARS_FILE} not found. Please run the categorization step first.")
sys.exit(1)

with open(STARS_FILE, 'r', encoding='utf-8') as f:
stars = json.load(f)

list_cache = get_existing_lists()
print(f"📂 Found {len(list_cache)} existing lists.")

print("\nStarting classification and syncing to GitHub...")
for i, repo in enumerate(stars):
category = repo["categorizeName"]

if category not in list_cache:
new_id = create_list(category)
if new_id:
list_cache[category] = new_id
print(f" ✨ Created new list: '{category}'")
else:
print(f" ⚠️ Failed to create '{category}', skipping {repo['nameWithOwner']}.")
continue

res = add_to_list(list_cache[category], repo['id'])

status_msg = f"[{i+1}/{len(stars)}] {repo['nameWithOwner']} -> '{category}'"
if res and 'errors' in res:
if "already in list" in str(res['errors']):
print(f" ⏭️ {status_msg} (already in list)")
else:
print(f" ❌ {status_msg} (failed: {res['errors'][0]['message'][:50]}...)")
else:
print(f" ✅ {status_msg}")

        # Short pause between mutations to avoid hammering the GitHub API.
        time.sleep(0.3)

print("\n🎉 All tasks completed! You can now refresh your GitHub Stars page.")

if __name__ == "__main__":
main()
26 changes: 26 additions & 0 deletions github-star-organizer/scripts/util.py
@@ -0,0 +1,26 @@
import json
import subprocess


def run_gh_api(query, variables=None, auto_fix=True):
    """Run a GraphQL query through `gh api graphql`, refreshing the 'user' scope once on permission errors."""
cmd = ["gh", "api", "graphql", "-f", f"query={query}"]
if variables:
for k, v in variables.items():
cmd.extend(["-F", f"{k}={v}"])

result = subprocess.run(cmd, capture_output=True, text=True)

# Detect insufficient scopes or missing fields (usually because 'user' scope is missing)
if result.returncode != 0:
err_msg = result.stderr
if auto_fix and ("insufficient scopes" in err_msg.lower() or "doesn't exist on type" in err_msg.lower()):
print("\n🔑 Insufficient permissions detected (missing 'user' scope), attempting to auto-fix...")
print("👉 Please complete GitHub authorization in the opened browser window.")
try:
# Attempt to refresh permissions
subprocess.run(["gh", "auth", "refresh", "-s", "user"], check=True)
print("✅ Permissions refreshed successfully, retrying operation...\n")
return run_gh_api(query, variables, auto_fix=False) # Retry
except subprocess.CalledProcessError:
print("❌ Permission refresh failed. Please run 'gh auth refresh -s user' manually and try again.")

return {"errors": [{"message": err_msg}]}
return json.loads(result.stdout)