Refactor: Integrate backend API and normalize data
This commit integrates the backend API for fetching and updating report data. It also includes a normalization function to handle data consistency between the API and local storage.

Co-authored-by: anthonymuncher <anthonymuncher@gmail.com>
Submodule FixMate-Backend deleted from 2be6df7e34
181
backend/Readme.md
Normal file
@@ -0,0 +1,181 @@
# 🛠️ FixMate Backend – Hackathon Prototype

Smart citizen-driven urban maintenance platform powered by **Computer Vision + Generative AI**.
This backend runs fully **locally** (no cloud required).

---

## 🚀 Features

* Citizen submits an image of an issue (pothole, streetlight, trash, signage).
* AI auto-classifies the issue + assigns severity.
* Ticket saved in local SQLite DB.
* API endpoints for citizens (report/status) and admins (tickets/analytics).
* Supports both **CPU-only** (safe) and **GPU-accelerated** (NVIDIA CUDA).

---

## 📦 Requirements

* Python **3.8–3.12** (3.11 recommended)
* `venv` for virtual environments
* (Optional) NVIDIA GPU with CUDA 11.8 or 12.1 drivers
---
## ⚙️ Setup Instructions

### 1. Clone repository

```bash
git clone https://github.com/yourteam/fixmate-backend.git
cd fixmate-backend
```

### 2. Create & activate virtual environment

```bash
python -m venv venv
```

**Windows (PowerShell):**

```powershell
venv\Scripts\activate
```

**Linux/macOS:**

```bash
source venv/bin/activate
```

---

### 3. Install dependencies

#### Option A – CPU only (safe for all laptops)

```bash
pip install -r requirements.txt
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

#### Option B – GPU (if you have NVIDIA + CUDA)

Check your driver version:

```bash
nvidia-smi
```

* If CUDA 12.1:

```bash
pip install -r requirements.txt
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

* If CUDA 11.8:

```bash
pip install -r requirements.txt
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
---
## 🧪 Verify Setup

Run the PyTorch check script:

```bash
python Backend/test/check_torch.py
```

Expected output:

* CPU build:

```
🔥 PyTorch version: 2.8.0+cpu
🖥️ CUDA available: False
```

* GPU build:

```
🔥 PyTorch version: 2.8.0
🖥️ CUDA available: True
 -> GPU name: NVIDIA GeForce RTX 3060
```
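For reference, a minimal sketch of what such a check script can look like (the committed `Backend/test/check_torch.py` may differ in details):

```python
# Minimal PyTorch environment check; assumes only that torch is installed.
import torch

print(f"🔥 PyTorch version: {torch.__version__}")
print(f"🖥️ CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f" -> GPU name: {torch.cuda.get_device_name(0)}")
```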
---
## ▶️ Run Backend Server

```bash
uvicorn app.main:app --reload
```

Open the Swagger API docs at:
👉 [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)

---

## 📷 Test ML Detection

Run the detection test on a sample image:

```bash
python Backend/test/test_detect.py --image ./test_images/pothole.jpg
```

Outputs:

* If the YOLO model works → JSON with detections.
* If fallback → heuristic result (pothole-like / dark-image).
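A hypothetical sketch of the shape of such a detection test script, assuming the YOLO weights live under `models/` (the committed `test_detect.py` may differ):

```python
# Hypothetical sketch: run YOLO on one image and print detections as JSON,
# falling back to a simple brightness heuristic if the model cannot be loaded.
import argparse
import json

import cv2
from ultralytics import YOLO

parser = argparse.ArgumentParser()
parser.add_argument("--image", required=True)
args = parser.parse_args()

try:
    model = YOLO("models/yolov12n.pt")  # weight path is an assumption
    results = model(args.image)
    boxes = [list(map(float, b)) for b in results[0].boxes.xyxy.cpu().numpy()]
    print(json.dumps({"detections": boxes}))
except Exception:
    # Fallback heuristic: very dark images vs. "pothole-like" everything else.
    img = cv2.imread(args.image, cv2.IMREAD_GRAYSCALE)
    label = "dark-image" if img.mean() < 60 else "pothole-like"
    print(json.dumps({"fallback": label}))
```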
---
## 📂 Project Structure

```
fixmate-backend/
│── README.md
│── requirements.txt
│── models/                # YOLO weights (downloaded here)
│── data/                  # SQLite DB + sample images
│── app/
│   ├── main.py            # FastAPI entrypoint
│   ├── models.py          # SQLAlchemy models
│   ├── schemas.py         # Pydantic schemas
│   ├── database.py        # DB connection (SQLite)
│   ├── routes/            # API routes
│   └── services/          # AI + ticket logic
│── Backend/test/
│   ├── check_torch.py     # Verify torch GPU/CPU
│   └── test_detect.py     # Run YOLO/heuristic on image
```

---

## 👥 Team Notes

* First run may take time (it downloads YOLO weights into `./models/`).
* Keep everything local (SQLite + images) for the hackathon.
* If no GPU is available, always use the CPU build.
---
# References

1) https://pyimagesearch.com/2025/07/21/training-yolov12-for-detecting-pothole-severity-using-a-custom-dataset/?utm_source=chatgpt.com
2) https://universe.roboflow.com/aegis/pothole-detection-i00zy/dataset/2#
43
backend/app/database.py
Normal file
@@ -0,0 +1,43 @@
|
|||||||
|
# app/database.py
|
||||||
|
import os
|
||||||
|
from sqlalchemy import create_engine
|
||||||
|
from sqlalchemy.orm import sessionmaker, declarative_base
|
||||||
|
import logging
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Logging Configuration
|
||||||
|
# ----------------------
|
||||||
|
logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s")
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Database Configuration
|
||||||
|
# ----------------------
|
||||||
|
DB_PATH = os.environ.get("FIXMATE_DB", "app/db/fixmate.db")
|
||||||
|
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
|
||||||
|
DATABASE_URL = f"sqlite:///{DB_PATH}"
|
||||||
|
|
||||||
|
engine = create_engine(
|
||||||
|
DATABASE_URL,
|
||||||
|
connect_args={"check_same_thread": False}, # Required for SQLite
|
||||||
|
echo=False # Set True for debugging SQL queries
|
||||||
|
)
|
||||||
|
|
||||||
|
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
|
||||||
|
Base = declarative_base()
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Dependency
|
||||||
|
# ----------------------
|
||||||
|
def get_db():
|
||||||
|
"""
|
||||||
|
Yield a database session for FastAPI dependency injection.
|
||||||
|
Example usage in route:
|
||||||
|
db: Session = Depends(get_db)
|
||||||
|
"""
|
||||||
|
db = SessionLocal()
|
||||||
|
try:
|
||||||
|
yield db
|
||||||
|
finally:
|
||||||
|
db.close()
|
||||||
|
|
||||||
|
logging.info(f"Database initialized at {DB_PATH}")
|
||||||
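As a usage illustration (not part of this commit), a route requests a session via `Depends(get_db)` and the session is closed automatically after the response; the endpoint name below is hypothetical:

```python
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app.database import get_db

router = APIRouter()

@router.get("/health/db")
def db_health(db: Session = Depends(get_db)):
    # The yielded session is valid for the duration of the request.
    return {"db_session_active": db.is_active}
```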
BIN
backend/app/db/fixmate.db
Normal file
Binary file not shown.
BIN
backend/app/models/classification/best_model.pth
Normal file
Binary file not shown.
8
backend/app/models/classification/class_mapping.json
Normal file
@@ -0,0 +1,8 @@
|
|||||||
|
{
|
||||||
|
"0": "broken_streetlight",
|
||||||
|
"1": "drainage",
|
||||||
|
"2": "garbage",
|
||||||
|
"3": "pothole",
|
||||||
|
"4": "signage",
|
||||||
|
"5": "streetlight"
|
||||||
|
}
|
||||||
BIN
backend/app/models/detection/best_severity_check.pt
Normal file
Binary file not shown.
BIN
backend/app/models/last_sevearity_check.pt
Normal file
Binary file not shown.
74
backend/app/models/ticket_model.py
Normal file
@@ -0,0 +1,74 @@
|
|||||||
|
import uuid
|
||||||
|
from sqlalchemy import Column, String, Float, Enum, DateTime, ForeignKey, Index
|
||||||
|
from sqlalchemy.orm import relationship
|
||||||
|
from sqlalchemy.sql import func
|
||||||
|
from app.database import Base
|
||||||
|
import enum
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Enums
|
||||||
|
# ----------------------
|
||||||
|
class TicketStatus(str, enum.Enum):
|
||||||
|
NEW = "New"
|
||||||
|
IN_PROGRESS = "In Progress"
|
||||||
|
FIXED = "Fixed"
|
||||||
|
|
||||||
|
class SeverityLevel(str, enum.Enum):
|
||||||
|
LOW = "Low"
|
||||||
|
MEDIUM = "Medium"
|
||||||
|
HIGH = "High"
|
||||||
|
NA = "N/A"
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# User Model
|
||||||
|
# ----------------------
|
||||||
|
class User(Base):
|
||||||
|
__tablename__ = "users"
|
||||||
|
|
||||||
|
id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()), index=True)
|
||||||
|
name = Column(String, nullable=False)
|
||||||
|
email = Column(String, unique=True, nullable=False)
|
||||||
|
|
||||||
|
tickets = relationship("Ticket", back_populates="user", cascade="all, delete-orphan")
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return f"<User(id={self.id}, name={self.name}, email={self.email})>"
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Ticket Model
|
||||||
|
# ----------------------
|
||||||
|
class Ticket(Base):
|
||||||
|
__tablename__ = "tickets"
|
||||||
|
|
||||||
|
id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()), index=True)
|
||||||
|
user_id = Column(String, ForeignKey("users.id", ondelete="CASCADE"), nullable=False)
|
||||||
|
image_path = Column(String, nullable=False)
|
||||||
|
category = Column(String, nullable=False)
|
||||||
|
severity = Column(Enum(SeverityLevel), nullable=False, default=SeverityLevel.NA)
|
||||||
|
description = Column(String, default="")
|
||||||
|
status = Column(Enum(TicketStatus), nullable=False, default=TicketStatus.NEW)
|
||||||
|
latitude = Column(Float, nullable=False)
|
||||||
|
longitude = Column(Float, nullable=False)
|
||||||
|
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||||
|
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
|
||||||
|
|
||||||
|
user = relationship("User", back_populates="tickets")
|
||||||
|
|
||||||
|
__table_args__ = (
|
||||||
|
Index("idx_category_status", "category", "status"),
|
||||||
|
)
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return f"<Ticket(id={self.id}, category={self.category}, severity={self.severity}, status={self.status}, user_id={self.user_id})>"
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Ticket Audit Model
|
||||||
|
# ----------------------
|
||||||
|
class TicketAudit(Base):
|
||||||
|
__tablename__ = "ticket_audit"
|
||||||
|
|
||||||
|
id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
|
||||||
|
ticket_id = Column(String, ForeignKey("tickets.id", ondelete="CASCADE"))
|
||||||
|
old_status = Column(Enum(TicketStatus))
|
||||||
|
new_status = Column(Enum(TicketStatus))
|
||||||
|
updated_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||||
BIN
backend/app/models/yolov12n.pt
Normal file
Binary file not shown.
64
backend/app/routes/analytics.py
Normal file
@@ -0,0 +1,64 @@
|
|||||||
|
# app/routes/analytics.py
|
||||||
|
from fastapi import APIRouter, Depends
|
||||||
|
from sqlalchemy.orm import Session
|
||||||
|
from sqlalchemy import func
|
||||||
|
from app.database import get_db
|
||||||
|
from app.models.ticket_model import Ticket, SeverityLevel, TicketStatus
|
||||||
|
from typing import Dict, Any
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# GET /analytics
|
||||||
|
# ----------------------
|
||||||
|
@router.get("/analytics", response_model=Dict[str, Any])
|
||||||
|
def analytics(db: Session = Depends(get_db), cluster_size: float = 0.01):
|
||||||
|
"""
|
||||||
|
Returns summary statistics for tickets:
|
||||||
|
- Total tickets
|
||||||
|
- Counts by category
|
||||||
|
- Counts by severity
|
||||||
|
- Counts by status
|
||||||
|
- Optional: location clustering (hotspots) using grid-based approach
|
||||||
|
"""
|
||||||
|
# Total tickets
|
||||||
|
total_tickets = db.query(func.count(Ticket.id)).scalar()
|
||||||
|
|
||||||
|
# Counts by category
|
||||||
|
category_counts = dict(
|
||||||
|
db.query(Ticket.category, func.count(Ticket.id))
|
||||||
|
.group_by(Ticket.category)
|
||||||
|
.all()
|
||||||
|
)
|
||||||
|
|
||||||
|
# Counts by severity
|
||||||
|
severity_counts = dict(
|
||||||
|
db.query(Ticket.severity, func.count(Ticket.id))
|
||||||
|
.group_by(Ticket.severity)
|
||||||
|
.all()
|
||||||
|
)
|
||||||
|
|
||||||
|
# Counts by status
|
||||||
|
status_counts = dict(
|
||||||
|
db.query(Ticket.status, func.count(Ticket.id))
|
||||||
|
.group_by(Ticket.status)
|
||||||
|
.all()
|
||||||
|
)
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Location Clustering
|
||||||
|
# ----------------------
|
||||||
|
# Simple grid-based clustering: round lat/lon to nearest cluster_size
|
||||||
|
tickets = db.query(Ticket.latitude, Ticket.longitude).all()
|
||||||
|
location_clusters: Dict[str, int] = {}
|
||||||
|
for lat, lon in tickets:
|
||||||
|
key = f"{round(lat/cluster_size)*cluster_size:.4f},{round(lon/cluster_size)*cluster_size:.4f}"
|
||||||
|
location_clusters[key] = location_clusters.get(key, 0) + 1
|
||||||
|
|
||||||
|
return {
|
||||||
|
"total_tickets": total_tickets,
|
||||||
|
"category_counts": category_counts,
|
||||||
|
"severity_counts": {k.value: v for k, v in severity_counts.items()},
|
||||||
|
"status_counts": {k.value: v for k, v in status_counts.items()},
|
||||||
|
"location_clusters": location_clusters # format: "lat,lon": count
|
||||||
|
}
|
||||||
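For clarity, a worked example of the cluster key computed above, using the default `cluster_size = 0.01` and sample coordinates (illustrative values only):

```python
lat, lon, cluster_size = 12.3456, 76.6543, 0.01  # sample coordinates, not real data
key = f"{round(lat/cluster_size)*cluster_size:.4f},{round(lon/cluster_size)*cluster_size:.4f}"
print(key)  # "12.3500,76.6500"; nearby reports fall into the same 0.01-degree grid cell
```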
100
backend/app/routes/report.py
Normal file
@@ -0,0 +1,100 @@
|
|||||||
|
from fastapi import APIRouter, UploadFile, File, Form, Depends, HTTPException
|
||||||
|
from fastapi.responses import JSONResponse
|
||||||
|
from sqlalchemy.orm import Session
|
||||||
|
from app.database import get_db
|
||||||
|
from app.services.ticket_service import TicketService, SeverityLevel
|
||||||
|
from app.models.ticket_model import User
|
||||||
|
from app.services.global_ai import get_ai_service
|
||||||
|
import os, uuid, logging
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
logger.setLevel(logging.DEBUG)
|
||||||
|
|
||||||
|
UPLOAD_DIR = "app/static/uploads"
|
||||||
|
os.makedirs(UPLOAD_DIR, exist_ok=True)
|
||||||
|
|
||||||
|
@router.post("/report")
|
||||||
|
async def report_issue(
|
||||||
|
user_id: str = Form(...),
|
||||||
|
latitude: float = Form(...),
|
||||||
|
longitude: float = Form(...),
|
||||||
|
description: str = Form(""),
|
||||||
|
image: UploadFile = File(...),
|
||||||
|
db: Session = Depends(get_db)
|
||||||
|
):
|
||||||
|
logger.debug("Received report request")
|
||||||
|
ticket_service = TicketService(db)
|
||||||
|
|
||||||
|
# Validate user
|
||||||
|
user = db.query(User).filter(User.id == user_id).first()
|
||||||
|
if not user:
|
||||||
|
logger.error(f"User with id {user_id} not found")
|
||||||
|
raise HTTPException(status_code=404, detail=f"User with id {user_id} not found")
|
||||||
|
logger.debug(f"User found: {user.name} ({user.email})")
|
||||||
|
|
||||||
|
# Save uploaded image
|
||||||
|
file_ext = os.path.splitext(image.filename)[1]
|
||||||
|
filename = f"{uuid.uuid4()}{file_ext}"
|
||||||
|
file_path = os.path.join(UPLOAD_DIR, filename)
|
||||||
|
try:
|
||||||
|
content = await image.read()
|
||||||
|
with open(file_path, "wb") as f:
|
||||||
|
f.write(content)
|
||||||
|
logger.debug(f"Saved image to {file_path} ({len(content)} bytes)")
|
||||||
|
except Exception as e:
|
||||||
|
logger.exception("Failed to save uploaded image")
|
||||||
|
raise HTTPException(status_code=500, detail="Failed to save uploaded image")
|
||||||
|
|
||||||
|
# Get initialized AI service
|
||||||
|
ai_service = get_ai_service()
|
||||||
|
logger.debug("AI service ready")
|
||||||
|
|
||||||
|
# Run AI predictions
|
||||||
|
try:
|
||||||
|
category = ai_service.classify_category(file_path)
|
||||||
|
logger.debug(f"Classification: {category}")
|
||||||
|
|
||||||
|
if category.lower() == "pothole":
|
||||||
|
severity_str, annotated_path = ai_service.detect_pothole_severity(file_path)
|
||||||
|
logger.debug(f"Detection: severity={severity_str}, path={annotated_path}")
|
||||||
|
severity = {
|
||||||
|
"High": SeverityLevel.HIGH,
|
||||||
|
"Medium": SeverityLevel.MEDIUM,
|
||||||
|
"Low": SeverityLevel.LOW,
|
||||||
|
"Unknown": SeverityLevel.NA
|
||||||
|
}.get(severity_str, SeverityLevel.NA)
|
||||||
|
else:
|
||||||
|
severity = SeverityLevel.NA
|
||||||
|
logger.debug("No detection needed")
|
||||||
|
except Exception as e:
|
||||||
|
logger.exception("AI prediction failed")
|
||||||
|
category = "Unknown"
|
||||||
|
severity = SeverityLevel.NA
|
||||||
|
|
||||||
|
# Create ticket
|
||||||
|
ticket = ticket_service.create_ticket(
|
||||||
|
user_id=user.id,
|
||||||
|
image_path=file_path,
|
||||||
|
category=category,
|
||||||
|
severity=severity,
|
||||||
|
latitude=latitude,
|
||||||
|
longitude=longitude,
|
||||||
|
description=description
|
||||||
|
)
|
||||||
|
logger.info(f"Ticket created: {ticket.id} for user {user.id}")
|
||||||
|
|
||||||
|
response = {
|
||||||
|
"ticket_id": ticket.id,
|
||||||
|
"user_id": user.id,
|
||||||
|
"user_name": user.name,
|
||||||
|
"user_email": user.email,
|
||||||
|
"category": ticket.category,
|
||||||
|
"severity": ticket.severity.value,
|
||||||
|
"status": ticket.status.value,
|
||||||
|
"description": ticket.description,
|
||||||
|
"image_path": ticket.image_path
|
||||||
|
}
|
||||||
|
|
||||||
|
logger.debug(f"Response: {response}")
|
||||||
|
return JSONResponse(status_code=201, content=response)
|
||||||
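A small client-side sketch of the two calls involved, assuming the server is running locally and the `requests` package is installed (names, coordinates, and the image path are placeholders):

```python
import requests

BASE = "http://127.0.0.1:8000/api"

# 1. Create a user (POST /api/users takes JSON with name and email).
user = requests.post(f"{BASE}/users", json={"name": "Asha", "email": "asha@example.com"}).json()

# 2. Submit a report (POST /api/report takes multipart form fields plus the image file).
with open("test_images/pothole.jpg", "rb") as img:
    resp = requests.post(
        f"{BASE}/report",
        data={
            "user_id": user["id"],
            "latitude": 12.9716,
            "longitude": 77.5946,
            "description": "Deep pothole near the bus stop",
        },
        files={"image": img},
    )
print(resp.status_code, resp.json())
```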
96
backend/app/routes/tickets.py
Normal file
@@ -0,0 +1,96 @@
|
|||||||
|
# app/routes/tickets.py
|
||||||
|
from typing import Optional, List
|
||||||
|
import logging
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException, Query
|
||||||
|
from sqlalchemy.orm import Session
|
||||||
|
from app.database import get_db
|
||||||
|
from app.services.ticket_service import TicketService, TicketStatus, SeverityLevel
|
||||||
|
from pydantic import BaseModel
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
logging.basicConfig(level=logging.INFO)
|
||||||
|
|
||||||
|
class TicketStatusUpdate(BaseModel):
|
||||||
|
new_status: TicketStatus
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# GET /tickets
|
||||||
|
# ----------------------
|
||||||
|
@router.get("/tickets", response_model=List[dict])
|
||||||
|
def list_tickets(
|
||||||
|
user_id: Optional[str] = Query(None, description="Filter by user ID"),
|
||||||
|
category: Optional[str] = Query(None, description="Filter by category"),
|
||||||
|
severity: Optional[SeverityLevel] = Query(None, description="Filter by severity"),
|
||||||
|
status: Optional[TicketStatus] = Query(None, description="Filter by status"),
|
||||||
|
db: Session = Depends(get_db)
|
||||||
|
):
|
||||||
|
service = TicketService(db)
|
||||||
|
tickets = service.list_tickets(user_id=user_id, category=category, severity=severity, status=status)
|
||||||
|
return [
|
||||||
|
{
|
||||||
|
"ticket_id": t.id,
|
||||||
|
"user_id": t.user_id,
|
||||||
|
"category": t.category,
|
||||||
|
"severity": t.severity.value,
|
||||||
|
"status": t.status.value,
|
||||||
|
"description": t.description,
|
||||||
|
"latitude": t.latitude,
|
||||||
|
"longitude": t.longitude,
|
||||||
|
"image_path": t.image_path,
|
||||||
|
"created_at": t.created_at,
|
||||||
|
"updated_at": t.updated_at
|
||||||
|
} for t in tickets
|
||||||
|
]
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# GET /tickets/{ticket_id}
|
||||||
|
# ----------------------
|
||||||
|
@router.get("/tickets/{ticket_id}", response_model=dict)
|
||||||
|
def get_ticket(ticket_id: str, db: Session = Depends(get_db)):
|
||||||
|
service = TicketService(db)
|
||||||
|
ticket = service.get_ticket(ticket_id)
|
||||||
|
if not ticket:
|
||||||
|
raise HTTPException(status_code=404, detail=f"Ticket {ticket_id} not found")
|
||||||
|
return {
|
||||||
|
"ticket_id": ticket.id,
|
||||||
|
"user_id": ticket.user_id,
|
||||||
|
"category": ticket.category,
|
||||||
|
"severity": ticket.severity.value,
|
||||||
|
"status": ticket.status.value,
|
||||||
|
"description": ticket.description,
|
||||||
|
"latitude": ticket.latitude,
|
||||||
|
"longitude": ticket.longitude,
|
||||||
|
"image_path": ticket.image_path,
|
||||||
|
"created_at": ticket.created_at,
|
||||||
|
"updated_at": ticket.updated_at
|
||||||
|
}
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# PATCH /tickets/{ticket_id} - Update status
|
||||||
|
# ----------------------
|
||||||
|
@router.patch("/tickets/{ticket_id}", response_model=dict)
|
||||||
|
def update_ticket_status(
|
||||||
|
ticket_id: str,
|
||||||
|
status_update: TicketStatusUpdate, # JSON body with new_status
|
||||||
|
db: Session = Depends(get_db)
|
||||||
|
):
|
||||||
|
service = TicketService(db)
|
||||||
|
try:
|
||||||
|
ticket = service.update_ticket_status(ticket_id, status_update.new_status)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Failed to update ticket status: {e}")
|
||||||
|
raise HTTPException(status_code=400, detail=str(e))
|
||||||
|
return {
|
||||||
|
"ticket_id": ticket.id,
|
||||||
|
"user_id": ticket.user_id,
|
||||||
|
"category": ticket.category,
|
||||||
|
"severity": ticket.severity.value,
|
||||||
|
"status": ticket.status.value,
|
||||||
|
"description": ticket.description,
|
||||||
|
"latitude": ticket.latitude,
|
||||||
|
"longitude": ticket.longitude,
|
||||||
|
"image_path": ticket.image_path,
|
||||||
|
"created_at": ticket.created_at,
|
||||||
|
"updated_at": ticket.updated_at
|
||||||
|
}
|
||||||
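For example, an admin client could move a ticket to "In Progress" like this (sketch; `<ticket_id>` is a placeholder and `requests` is assumed to be installed):

```python
import requests

resp = requests.patch(
    "http://127.0.0.1:8000/api/tickets/<ticket_id>",  # replace with a real ticket id
    json={"new_status": "In Progress"},               # must be one of: New, In Progress, Fixed
)
print(resp.json()["status"])
```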
18
backend/app/routes/users.py
Normal file
@@ -0,0 +1,18 @@
|
|||||||
|
# app/routes/users.py
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException
|
||||||
|
from sqlalchemy.orm import Session
|
||||||
|
from app.database import get_db
|
||||||
|
from app.services.ticket_service import TicketService
|
||||||
|
from app.models.ticket_model import User
|
||||||
|
from app.schemas.user_schema import UserCreate # import schema
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
@router.post("/users")
|
||||||
|
def create_user(user: UserCreate, db: Session = Depends(get_db)):
|
||||||
|
service = TicketService(db)
|
||||||
|
existing_user = db.query(User).filter(User.email == user.email).first()
|
||||||
|
if existing_user:
|
||||||
|
raise HTTPException(status_code=400, detail="User with this email already exists")
|
||||||
|
new_user = service.create_user(user.name, user.email)
|
||||||
|
return {"id": new_user.id, "name": new_user.name, "email": new_user.email}
|
||||||
6
backend/app/schemas/user_schema.py
Normal file
@@ -0,0 +1,6 @@
|
|||||||
|
# app/schemas/user_schema.py
|
||||||
|
from pydantic import BaseModel, EmailStr
|
||||||
|
|
||||||
|
class UserCreate(BaseModel):
|
||||||
|
name: str
|
||||||
|
email: EmailStr
|
||||||
138
backend/app/services/ai_service.py
Normal file
@@ -0,0 +1,138 @@
|
|||||||
|
import os
|
||||||
|
import logging
|
||||||
|
from typing import Tuple
|
||||||
|
import torch
|
||||||
|
from torchvision import transforms, models
|
||||||
|
from PIL import Image
|
||||||
|
import cv2
|
||||||
|
from ultralytics import YOLO
|
||||||
|
import json
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
logger.setLevel(logging.INFO)
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# AI Model Manager
|
||||||
|
# ----------------------
|
||||||
|
class AIModelManager:
|
||||||
|
"""Loads and keeps classification and detection models in memory."""
|
||||||
|
def __init__(self, device: str = None):
|
||||||
|
self.device = torch.device(device or ("cuda" if torch.cuda.is_available() else "cpu"))
|
||||||
|
|
||||||
|
# Compute relative paths
|
||||||
|
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
|
||||||
|
self.class_model_path = os.path.join(BASE_DIR, "models", "classification", "best_model.pth")
|
||||||
|
self.class_mapping_path = os.path.join(BASE_DIR, "models", "classification", "class_mapping.json")
|
||||||
|
self.detection_model_path = os.path.join(BASE_DIR, "models", "detection", "best_severity_check.pt")
|
||||||
|
|
||||||
|
|
||||||
|
# Initialize models
|
||||||
|
self.class_model = None
|
||||||
|
self.class_names = None
|
||||||
|
self._load_classification_model()
|
||||||
|
self.detection_model = None
|
||||||
|
self._load_detection_model()
|
||||||
|
|
||||||
|
# Preprocess for classification
|
||||||
|
self.preprocess = transforms.Compose([
|
||||||
|
transforms.Resize((224, 224)),
|
||||||
|
transforms.ToTensor()
|
||||||
|
])
|
||||||
|
|
||||||
|
def _load_classification_model(self):
|
||||||
|
logger.info("Loading classification model...")
|
||||||
|
with open(self.class_mapping_path, "r") as f:
|
||||||
|
class_mapping = json.load(f)
|
||||||
|
self.class_names = [class_mapping[str(i)] for i in range(len(class_mapping))]
|
||||||
|
|
||||||
|
self.class_model = models.resnet18(weights=None)
|
||||||
|
self.class_model.fc = torch.nn.Linear(self.class_model.fc.in_features, len(self.class_names))
|
||||||
|
state_dict = torch.load(self.class_model_path, map_location=self.device)
|
||||||
|
self.class_model.load_state_dict(state_dict)
|
||||||
|
self.class_model.to(self.device)
|
||||||
|
self.class_model.eval()
|
||||||
|
logger.info("Classification model loaded successfully.")
|
||||||
|
|
||||||
|
def _load_detection_model(self):
|
||||||
|
logger.info("Loading YOLO detection model...")
|
||||||
|
self.detection_model = YOLO(self.detection_model_path)
|
||||||
|
logger.info("YOLO detection model loaded successfully.")
|
||||||
|
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# AI Service
|
||||||
|
# ----------------------
|
||||||
|
class AIService:
|
||||||
|
"""Handles classification and detection using preloaded models."""
|
||||||
|
def __init__(self, model_manager: AIModelManager):
|
||||||
|
self.models = model_manager
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Classification
|
||||||
|
# ----------------------
|
||||||
|
def classify_category(self, image_path: str) -> str:
|
||||||
|
image = Image.open(image_path).convert("RGB")
|
||||||
|
input_tensor = self.models.preprocess(image).unsqueeze(0).to(self.models.device)
|
||||||
|
with torch.no_grad():
|
||||||
|
outputs = self.models.class_model(input_tensor)
|
||||||
|
_, predicted = torch.max(outputs, 1)
|
||||||
|
category = self.models.class_names[predicted.item()]
|
||||||
|
logger.info(f"Image '{image_path}' classified as '{category}'.")
|
||||||
|
return category
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Detection / Severity
|
||||||
|
# ----------------------
|
||||||
|
@staticmethod
|
||||||
|
def classify_severity(box: Tuple[int, int, int, int], image_height: int) -> str:
|
||||||
|
x1, y1, x2, y2 = box
|
||||||
|
area = (x2 - x1) * (y2 - y1)
|
||||||
|
if area > 50000 or y2 > image_height * 0.75:
|
||||||
|
return "High"
|
||||||
|
elif area > 20000 or y2 > image_height * 0.5:
|
||||||
|
return "Medium"
|
||||||
|
else:
|
||||||
|
return "Low"
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def draw_boxes_and_severity(image, results) -> None:
|
||||||
|
for r in results:
|
||||||
|
for box in r.boxes.xyxy:
|
||||||
|
x1, y1, x2, y2 = map(int, box.cpu().numpy())
|
||||||
|
conf = float(r.boxes.conf[0]) if hasattr(r.boxes, "conf") else 0.0
|
||||||
|
severity = AIService.classify_severity((x1, y1, x2, y2), image.shape[0])
|
||||||
|
color = (0, 255, 0) if severity == "Low" else (0, 255, 255) if severity == "Medium" else (0, 0, 255)
|
||||||
|
cv2.rectangle(image, (x1, y1), (x2, y2), color, 2)
|
||||||
|
cv2.putText(image, f"{severity} ({conf:.2f})", (x1, y1 - 10),
|
||||||
|
cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
|
||||||
|
|
||||||
|
def detect_pothole_severity(self, image_path: str, output_path: str = None) -> Tuple[str, str]:
|
||||||
|
image = cv2.imread(image_path)
|
||||||
|
results = self.models.detection_model(image)
|
||||||
|
self.draw_boxes_and_severity(image, results)
|
||||||
|
|
||||||
|
# Determine highest severity
|
||||||
|
severities = []
|
||||||
|
for r in results:
|
||||||
|
for box in r.boxes.xyxy:
|
||||||
|
severities.append(self.classify_severity(map(int, box.cpu().numpy()), image.shape[0]))
|
||||||
|
|
||||||
|
if severities:
|
||||||
|
if "High" in severities:
|
||||||
|
severity = "High"
|
||||||
|
elif "Medium" in severities:
|
||||||
|
severity = "Medium"
|
||||||
|
else:
|
||||||
|
severity = "Low"
|
||||||
|
else:
|
||||||
|
severity = "Unknown"
|
||||||
|
|
||||||
|
# Save annotated image
|
||||||
|
if output_path:
|
||||||
|
os.makedirs(os.path.dirname(output_path), exist_ok=True)
|
||||||
|
cv2.imwrite(output_path, image)
|
||||||
|
else:
|
||||||
|
output_path = image_path
|
||||||
|
|
||||||
|
logger.info(f"Pothole severity: {severity}, output image saved to '{output_path}'.")
|
||||||
|
return severity, output_path
|
||||||
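A standalone usage sketch of the service above, outside FastAPI (assumes the model files referenced in `AIModelManager` exist locally; the sample image path is a placeholder):

```python
from app.services.ai_service import AIModelManager, AIService

manager = AIModelManager()   # loads the ResNet classifier and YOLO detector once
service = AIService(manager)

category = service.classify_category("test_images/pothole.jpg")
if category.lower() == "pothole":
    severity, annotated = service.detect_pothole_severity("test_images/pothole.jpg")
    print(category, severity, annotated)
else:
    print(category)
```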
43
backend/app/services/global_ai.py
Normal file
@@ -0,0 +1,43 @@
|
|||||||
|
import os
|
||||||
|
from app.services.ai_service import AIModelManager, AIService
|
||||||
|
import logging
|
||||||
|
import random
|
||||||
|
from typing import Tuple
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
logger.setLevel(logging.DEBUG)
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Lazy-initialized AI service
|
||||||
|
# ----------------------
|
||||||
|
_ai_service: AIService = None
|
||||||
|
|
||||||
|
def init_ai_service() -> AIService:
|
||||||
|
"""Initializes the AI service if not already initialized."""
|
||||||
|
global _ai_service
|
||||||
|
if _ai_service is None:
|
||||||
|
logger.debug("Initializing AI service...")
|
||||||
|
try:
|
||||||
|
model_manager = AIModelManager()
|
||||||
|
_ai_service = AIService(model_manager)
|
||||||
|
logger.info("AI service ready.")
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"Failed to initialize AI service: {e}. Using mock service.")
|
||||||
|
# Create a mock AI service for now
|
||||||
|
_ai_service = MockAIService()
|
||||||
|
return _ai_service
|
||||||
|
|
||||||
|
def get_ai_service() -> AIService:
|
||||||
|
"""Returns the initialized AI service."""
|
||||||
|
return init_ai_service()
|
||||||
|
|
||||||
|
# Mock AI service for testing when models can't be loaded
|
||||||
|
class MockAIService:
|
||||||
|
def classify_category(self, image_path: str) -> str:
|
||||||
|
categories = ["pothole", "streetlight", "garbage", "signage", "drainage", "other"]
|
||||||
|
return random.choice(categories)
|
||||||
|
|
||||||
|
def detect_pothole_severity(self, image_path: str) -> Tuple[str, str]:
|
||||||
|
severities = ["High", "Medium", "Low"]
|
||||||
|
severity = random.choice(severities)
|
||||||
|
return severity, image_path # Return same path as annotated path
|
||||||
103
backend/app/services/ticket_service.py
Normal file
@@ -0,0 +1,103 @@
|
|||||||
|
# app/services/ticket_service.py
|
||||||
|
import uuid
|
||||||
|
from typing import List, Optional
|
||||||
|
from sqlalchemy.orm import Session
|
||||||
|
from sqlalchemy.exc import NoResultFound
|
||||||
|
from app.models.ticket_model import User, Ticket, TicketAudit, TicketStatus, SeverityLevel
|
||||||
|
import logging
|
||||||
|
|
||||||
|
logging.basicConfig(level=logging.INFO)
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Ticket Service
|
||||||
|
# ----------------------
|
||||||
|
class TicketService:
|
||||||
|
def __init__(self, db: Session):
|
||||||
|
self.db = db
|
||||||
|
|
||||||
|
# ------------------
|
||||||
|
# User Operations
|
||||||
|
# ------------------
|
||||||
|
def create_user(self, name: str, email: str) -> User:
|
||||||
|
user = User(name=name, email=email)
|
||||||
|
self.db.add(user)
|
||||||
|
self.db.commit()
|
||||||
|
self.db.refresh(user)
|
||||||
|
logger.info(f"Created user {user}")
|
||||||
|
return user # <-- return User object
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def get_user(self, user_id: str) -> Optional[User]:
|
||||||
|
return self.db.query(User).filter(User.id == user_id).first()
|
||||||
|
|
||||||
|
# ------------------
|
||||||
|
# Ticket Operations
|
||||||
|
# ------------------
|
||||||
|
def create_ticket(
|
||||||
|
self,
|
||||||
|
user_id: str,
|
||||||
|
image_path: str,
|
||||||
|
category: str,
|
||||||
|
severity: SeverityLevel,
|
||||||
|
latitude: float,
|
||||||
|
longitude: float,
|
||||||
|
description: str = "",
|
||||||
|
) -> Ticket:
|
||||||
|
ticket = Ticket(
|
||||||
|
id=str(uuid.uuid4()),
|
||||||
|
user_id=user_id,
|
||||||
|
image_path=image_path,
|
||||||
|
category=category,
|
||||||
|
severity=severity,
|
||||||
|
latitude=latitude,
|
||||||
|
longitude=longitude,
|
||||||
|
description=description,
|
||||||
|
)
|
||||||
|
self.db.add(ticket)
|
||||||
|
self.db.commit()
|
||||||
|
self.db.refresh(ticket)
|
||||||
|
logger.info(f"Created ticket {ticket}")
|
||||||
|
return ticket
|
||||||
|
|
||||||
|
def update_ticket_status(self, ticket_id: str, new_status: TicketStatus) -> Ticket:
|
||||||
|
ticket = self.db.query(Ticket).filter(Ticket.id == ticket_id).first()
|
||||||
|
if not ticket:
|
||||||
|
raise NoResultFound(f"Ticket with id {ticket_id} not found")
|
||||||
|
|
||||||
|
# Log audit
|
||||||
|
audit = TicketAudit(
|
||||||
|
ticket_id=ticket.id,
|
||||||
|
old_status=ticket.status,
|
||||||
|
new_status=new_status,
|
||||||
|
)
|
||||||
|
self.db.add(audit)
|
||||||
|
|
||||||
|
# Update status
|
||||||
|
ticket.status = new_status
|
||||||
|
self.db.commit()
|
||||||
|
self.db.refresh(ticket)
|
||||||
|
logger.info(f"Updated ticket {ticket.id} status to {new_status}")
|
||||||
|
return ticket
|
||||||
|
|
||||||
|
def get_ticket(self, ticket_id: str) -> Optional[Ticket]:
|
||||||
|
return self.db.query(Ticket).filter(Ticket.id == ticket_id).first()
|
||||||
|
|
||||||
|
def list_tickets(
|
||||||
|
self,
|
||||||
|
user_id: Optional[str] = None,
|
||||||
|
category: Optional[str] = None,
|
||||||
|
severity: Optional[SeverityLevel] = None,
|
||||||
|
status: Optional[TicketStatus] = None
|
||||||
|
) -> List[Ticket]:
|
||||||
|
query = self.db.query(Ticket)
|
||||||
|
if user_id:
|
||||||
|
query = query.filter(Ticket.user_id == user_id)
|
||||||
|
if category:
|
||||||
|
query = query.filter(Ticket.category == category)
|
||||||
|
if severity:
|
||||||
|
query = query.filter(Ticket.severity == severity)
|
||||||
|
if status:
|
||||||
|
query = query.filter(Ticket.status == status)
|
||||||
|
return query.order_by(Ticket.created_at.desc()).all()
|
||||||
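The service can also be used outside a request, e.g. in a small seeding or maintenance script (sketch, not part of this commit):

```python
from app.database import SessionLocal
from app.services.ticket_service import TicketService, TicketStatus

db = SessionLocal()
try:
    service = TicketService(db)
    new_tickets = service.list_tickets(status=TicketStatus.NEW)
    print(f"{len(new_tickets)} tickets still marked as New")
finally:
    db.close()
```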
26947
backend/get-pip.py
Normal file
File diff suppressed because it is too large
72
backend/main.py
Normal file
@@ -0,0 +1,72 @@
|
|||||||
|
import os
|
||||||
|
import logging
|
||||||
|
from contextlib import asynccontextmanager
|
||||||
|
from fastapi import FastAPI
|
||||||
|
from fastapi.staticfiles import StaticFiles
|
||||||
|
from app.database import Base, engine
|
||||||
|
from app.routes import report, tickets, analytics, users
|
||||||
|
from app.services.global_ai import init_ai_service
|
||||||
|
|
||||||
|
logging.basicConfig(level=logging.DEBUG)
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Lifespan context for startup/shutdown
|
||||||
|
# ----------------------
|
||||||
|
@asynccontextmanager
|
||||||
|
async def lifespan(app: FastAPI):
|
||||||
|
logger.info("Starting FixMate Backend...")
|
||||||
|
init_ai_service() # ✅ Models load once here
|
||||||
|
logger.info("AI models loaded successfully.")
|
||||||
|
yield
|
||||||
|
logger.info("FixMate Backend shutting down...")
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Initialize FastAPI
|
||||||
|
# ----------------------
|
||||||
|
app = FastAPI(
|
||||||
|
title="FixMate Backend API",
|
||||||
|
description="Backend for FixMate Hackathon Prototype",
|
||||||
|
version="1.0.0",
|
||||||
|
lifespan=lifespan
|
||||||
|
)
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Initialize DB
|
||||||
|
# ----------------------
|
||||||
|
Base.metadata.create_all(bind=engine)
|
||||||
|
logger.info("Database initialized.")
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Static files
|
||||||
|
# ----------------------
|
||||||
|
UPLOAD_DIR = "static/uploads"
|
||||||
|
os.makedirs(UPLOAD_DIR, exist_ok=True)
|
||||||
|
app.mount("/static", StaticFiles(directory="static"), name="static")
|
||||||
|
|
||||||
|
# ----------------------
|
||||||
|
# Include routers
|
||||||
|
# ----------------------
|
||||||
|
try:
|
||||||
|
app.include_router(report.router, prefix="/api", tags=["Report"])
|
||||||
|
app.include_router(tickets.router, prefix="/api", tags=["Tickets"])
|
||||||
|
app.include_router(analytics.router, prefix="/api", tags=["Analytics"])
|
||||||
|
app.include_router(users.router, prefix="/api", tags=["Users"])
|
||||||
|
print("✅ All routers included successfully")
|
||||||
|
except Exception as e:
|
||||||
|
print(f"❌ Error including routers: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
|
||||||
|
@app.get("/")
|
||||||
|
def root():
|
||||||
|
return {"message": "Welcome to FixMate Backend API! Visit /docs for API documentation."}
|
||||||
|
|
||||||
|
print("✅ FastAPI server setup complete")
|
||||||
|
|
||||||
|
# Start the server when running this script directly
|
||||||
|
if __name__ == "__main__":
|
||||||
|
import uvicorn
|
||||||
|
print("🚀 Starting server on http://127.0.0.1:8000")
|
||||||
|
print("📚 API documentation available at http://127.0.0.1:8000/docs")
|
||||||
|
uvicorn.run(app, host="127.0.0.1", port=8000)
|
||||||
126
backend/plan.md
Normal file
@@ -0,0 +1,126 @@
|
|||||||
|
This document captures the agreed **flow + plan** for the FixMate backend, so the team knows exactly how the pieces connect before the code is written.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
# ⚡ Backend Flow (FixMate Local Prototype)
|
||||||
|
|
||||||
|
### 1. Citizen Upload Flow
|
||||||
|
|
||||||
|
1. **Citizen uses frontend** (simple React, Streamlit, or Swagger UI for now).
|
||||||
|
2. Submits:
|
||||||
|
|
||||||
|
* Photo (issue picture).
|
||||||
|
* GPS location (lat/lng) (either auto from frontend or manually typed for demo).
|
||||||
|
* Optional notes.
|
||||||
|
3. Backend endpoint: `POST /report`
|
||||||
|
|
||||||
|
* Saves photo → `./app/static/uploads/`
|
||||||
|
* Runs **AI classification** (via YOLOv8 model from Hugging Face).
|
||||||
|
* Runs **severity logic** (based on bounding box size / confidence).
|
||||||
|
* Generates **ticket record** in DB.
|
||||||
|
* Returns JSON: `{id, category, severity, status, description}`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 2. AI Model Flow
|
||||||
|
|
||||||
|
* First time backend runs:
|
||||||
|
|
||||||
|
* Check if `models/` folder exists. If not, create it.
|
||||||
|
* Use **`hf_hub_download`** to fetch YOLOv8n weights into `./models/` (see the sketch after this list).
|
||||||
|
* Load the model from that path with `ultralytics.YOLO`.
|
||||||
|
* Every report:
|
||||||
|
|
||||||
|
* Pass image to model → detect objects.
|
||||||
|
* Map objects to FixMate categories (`pothole`, `streetlight`, `trash`, `signage`).
|
||||||
|
* Apply **severity scoring** (e.g. bounding box area = High if > certain %).
|
||||||
|
* If model fails (no internet, missing weights):
|
||||||
|
|
||||||
|
* Use fallback heuristic (OpenCV contour/brightness detection).
|
||||||
|
|
||||||
|
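A sketch of that first-run download step, assuming a Hugging Face repository hosting YOLOv8n weights (the repo id and filename below are placeholders, not confirmed by this plan):

```python
import os

from huggingface_hub import hf_hub_download
from ultralytics import YOLO

os.makedirs("models", exist_ok=True)
weights_path = hf_hub_download(
    repo_id="Ultralytics/YOLOv8",  # placeholder repo id
    filename="yolov8n.pt",         # placeholder filename
    local_dir="models",
)
model = YOLO(weights_path)         # subsequent runs reuse the downloaded weights
```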
---
|
||||||
|
|
||||||
|
### 3. Ticket Lifecycle Flow
|
||||||
|
|
||||||
|
* Ticket schema:
|
||||||
|
|
||||||
|
```
|
||||||
|
id, image_path, category, severity, location, description, status, timestamps
|
||||||
|
```
|
||||||
|
* Default status = `"New"`.
|
||||||
|
* Admin dashboard endpoints:
|
||||||
|
|
||||||
|
* `GET /tickets` → list all tickets.
|
||||||
|
* `GET /tickets/{id}` → fetch ticket details.
|
||||||
|
* `PATCH /tickets/{id}` → update status (`In Progress`, `Fixed`).
|
||||||
|
* Citizens can query:
|
||||||
|
|
||||||
|
* `GET /status/{id}` → see ticket’s status.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 4. Dashboard & Analytics Flow
|
||||||
|
|
||||||
|
* Admin UI (or Swagger demo) calls:
|
||||||
|
|
||||||
|
* `/tickets` → display list or map markers.
|
||||||
|
* `/analytics` → simple stats:
|
||||||
|
|
||||||
|
* Total tickets.
|
||||||
|
* Counts by category & severity.
|
||||||
|
* (Optional) Location clustering for hotspots.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
# 🛠️ Development Plan
|
||||||
|
|
||||||
|
### Step 1 – Environment & Repo
|
||||||
|
|
||||||
|
* Conda or venv, install dependencies (FastAPI, SQLAlchemy, ultralytics, huggingface\_hub).
|
||||||
|
* Initialize Git repo with `.gitignore`, `requirements.txt`.
|
||||||
|
|
||||||
|
### Step 2 – Database & Models
|
||||||
|
|
||||||
|
* SQLite with SQLAlchemy ORM.
|
||||||
|
* `Ticket` model with enum fields for severity + status.
|
||||||
|
|
||||||
|
### Step 3 – AI Service
|
||||||
|
|
||||||
|
* `ai_service.py` handles:
|
||||||
|
|
||||||
|
* Ensure `models/` exists.
|
||||||
|
* Download YOLOv8 from Hugging Face into `./models/`.
|
||||||
|
* Load model.
|
||||||
|
* `detect_issue(image_path)` returns `{category, severity, confidence}`.
|
||||||
|
|
||||||
|
### Step 4 – Ticket Service
|
||||||
|
|
||||||
|
* Saves image locally.
|
||||||
|
* Calls `ai_service.detect_issue()`.
|
||||||
|
* Creates DB record.
|
||||||
|
|
||||||
|
### Step 5 – API Routes
|
||||||
|
|
||||||
|
* `/report` → citizen upload.
|
||||||
|
* `/tickets` → list all tickets.
|
||||||
|
* `/tickets/{id}` → fetch ticket.
|
||||||
|
* `/tickets/{id}` PATCH → update status.
|
||||||
|
* `/analytics` → summary stats.
|
||||||
|
|
||||||
|
### Step 6 – Demo Prep
|
||||||
|
|
||||||
|
* Populate DB with some sample tickets.
|
||||||
|
* Upload a few pothole/streetlight images → verify classification.
|
||||||
|
* Test via Swagger UI at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
✅ With this flow, you’ll have a **complete hackathon backend** that:
|
||||||
|
|
||||||
|
* Works offline after first model download.
|
||||||
|
* Saves everything locally (SQLite + images).
|
||||||
|
* Provides APIs ready for a frontend dashboard.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
|
||||||
16
backend/requirements.txt
Normal file
@@ -0,0 +1,16 @@
fastapi
uvicorn
sqlalchemy
sqlite-utils
ultralytics
opencv-python
pillow
torch
torchvision
torchaudio
pytest
black
isort
huggingface_hub
datasets
transformers
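Note: two runtime dependencies used by the code in this commit are not listed here and are not guaranteed to arrive transitively: `python-multipart` (FastAPI needs it for the `Form`/`File` parameters in `app/routes/report.py`) and `email-validator` (required by Pydantic's `EmailStr` in `app/schemas/user_schema.py`). If imports fail at startup, install them alongside the packages above.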
33
backend/test/Machine_Learning/broken_street_light.py
Normal file
@@ -0,0 +1,33 @@
|
|||||||
|
# from bing_image_downloader import downloader
|
||||||
|
|
||||||
|
# downloader.download(
|
||||||
|
# "broken streetlight",
|
||||||
|
# limit=100,
|
||||||
|
# output_dir='dataset_downloads',
|
||||||
|
# adult_filter_off=True,
|
||||||
|
# force_replace=False,
|
||||||
|
# timeout=60
|
||||||
|
# )
|
||||||
|
|
||||||
|
from bing_image_downloader import downloader
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
# ---------- CONFIG ----------
|
||||||
|
CLASS_NAME = "drainage"
|
||||||
|
LIMIT = 200 # number of images to download
|
||||||
|
OUTPUT_DIR = Path("dataset_downloads") # folder to store downloaded images
|
||||||
|
|
||||||
|
# Ensure the output directory exists
|
||||||
|
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
# ---------- DOWNLOAD IMAGES ----------
|
||||||
|
downloader.download(
|
||||||
|
CLASS_NAME,
|
||||||
|
limit=LIMIT,
|
||||||
|
output_dir=str(OUTPUT_DIR),
|
||||||
|
adult_filter_off=True, # keep it safe
|
||||||
|
force_replace=False, # don't overwrite if already downloaded
|
||||||
|
timeout=60 # seconds per request
|
||||||
|
)
|
||||||
|
|
||||||
|
print(f"✅ Downloaded {LIMIT} images for class '{CLASS_NAME}' in '{OUTPUT_DIR}'")
|
||||||
92
backend/test/Machine_Learning/fetch_datasets.py
Normal file
@@ -0,0 +1,92 @@
|
|||||||
|
import os
|
||||||
|
import zipfile
|
||||||
|
import shutil
|
||||||
|
import random
|
||||||
|
import json
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
# ---------- CONFIG ----------
|
||||||
|
BASE_DIR = Path("dataset")
|
||||||
|
DOWNLOAD_DIR = Path("downloads")
|
||||||
|
CLASSES = ["pothole", "streetlight", "garbage", "signage"]
|
||||||
|
TRAIN_SPLIT = 0.8 # 80% train, 20% val
|
||||||
|
|
||||||
|
os.makedirs(BASE_DIR, exist_ok=True)
|
||||||
|
os.makedirs(DOWNLOAD_DIR, exist_ok=True)
|
||||||
|
|
||||||
|
# Create folder structure
|
||||||
|
for split in ["train", "val"]:
|
||||||
|
for cls in CLASSES:
|
||||||
|
os.makedirs(BASE_DIR / split / cls, exist_ok=True)
|
||||||
|
|
||||||
|
# ---------- AUTHENTICATION ----------
|
||||||
|
def setup_kaggle_api():
|
||||||
|
"""Load kaggle.json and set environment variables"""
|
||||||
|
kaggle_path = Path("kaggle.json") # put kaggle.json in the same folder as this script
|
||||||
|
if not kaggle_path.exists():
|
||||||
|
raise FileNotFoundError("❌ kaggle.json not found! Download it from https://www.kaggle.com/settings")
|
||||||
|
|
||||||
|
with open(kaggle_path, "r") as f:
|
||||||
|
creds = json.load(f)
|
||||||
|
|
||||||
|
os.environ["KAGGLE_USERNAME"] = creds["username"]
|
||||||
|
os.environ["KAGGLE_KEY"] = creds["key"]
|
||||||
|
print("✅ Kaggle API credentials loaded.")
|
||||||
|
|
||||||
|
# ---------- HELPERS ----------
|
||||||
|
def unzip_and_move(zip_path, class_name):
|
||||||
|
"""Unzip dataset and put images into dataset/train/ & val/ folders"""
|
||||||
|
extract_path = Path("tmp_extract")
|
||||||
|
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
|
||||||
|
zip_ref.extractall(extract_path)
|
||||||
|
|
||||||
|
# Collect images
|
||||||
|
all_images = list(extract_path.rglob("*.jpg")) + list(extract_path.rglob("*.png")) + list(extract_path.rglob("*.jpeg"))
|
||||||
|
random.shuffle(all_images)
|
||||||
|
|
||||||
|
# Train/Val split
|
||||||
|
split_idx = int(len(all_images) * TRAIN_SPLIT)
|
||||||
|
train_files = all_images[:split_idx]
|
||||||
|
val_files = all_images[split_idx:]
|
||||||
|
|
||||||
|
for img in train_files:
|
||||||
|
target = BASE_DIR / "train" / class_name / img.name
|
||||||
|
shutil.move(str(img), target)
|
||||||
|
|
||||||
|
for img in val_files:
|
||||||
|
target = BASE_DIR / "val" / class_name / img.name
|
||||||
|
shutil.move(str(img), target)
|
||||||
|
|
||||||
|
shutil.rmtree(extract_path)
|
||||||
|
|
||||||
|
def kaggle_download(dataset_slug, out_zip):
|
||||||
|
"""Download Kaggle dataset into downloads/ folder"""
|
||||||
|
os.system(f'kaggle datasets download -d {dataset_slug} -p {DOWNLOAD_DIR} -o')
|
||||||
|
return DOWNLOAD_DIR / out_zip
|
||||||
|
|
||||||
|
# ---------- MAIN ----------
|
||||||
|
if __name__ == "__main__":
|
||||||
|
setup_kaggle_api()
|
||||||
|
|
||||||
|
# Pothole dataset
|
||||||
|
pothole_zip = kaggle_download("andrewmvd/pothole-detection", "pothole-detection.zip")
|
||||||
|
unzip_and_move(pothole_zip, "pothole")
|
||||||
|
|
||||||
|
# Garbage dataset
|
||||||
|
garbage_zip = kaggle_download("dataclusterlabs/domestic-trash-garbage-dataset", "domestic-trash-garbage-dataset.zip")
|
||||||
|
unzip_and_move(garbage_zip, "garbage")
|
||||||
|
|
||||||
|
# TrashNet (alternative garbage dataset)
|
||||||
|
trashnet_zip = kaggle_download("techsash/waste-classification-data", "waste-classification-data.zip")
|
||||||
|
unzip_and_move(trashnet_zip, "garbage")
|
||||||
|
|
||||||
|
# Signage dataset
|
||||||
|
signage_zip = kaggle_download("ahemateja19bec1025/traffic-sign-dataset-classification", "traffic-sign-dataset-classification.zip")
|
||||||
|
unzip_and_move(signage_zip, "signage") # Combine all sign classes into one
|
||||||
|
|
||||||
|
#Drainage dataset (⚠️ still missing)
|
||||||
|
print("⚠️ No Kaggle dataset found for drainage. Please add manually to dataset/train/drainage & val/drainage.")
|
||||||
|
# Streetlight dataset (⚠️ still missing)
|
||||||
|
print("⚠️ No Kaggle dataset found for streetlights. Please add manually to dataset/train/streetlight & val/streetlight.")
|
||||||
|
|
||||||
|
print("✅ All datasets downloaded, cleaned, and organized into 'dataset/'")
|
||||||
43
backend/test/Machine_Learning/oraganize_path.py
Normal file
@@ -0,0 +1,43 @@
|
|||||||
|
import os
|
||||||
|
import shutil
|
||||||
|
import random
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
# ---------- CONFIG ----------
|
||||||
|
SRC_DIR = Path("dataset_downloads") # where new images are
|
||||||
|
DST_DIR = Path("dataset") # your main dataset folder
|
||||||
|
TRAIN_SPLIT = 0.8 # 80% train, 20% val
|
||||||
|
|
||||||
|
# Classes to process
|
||||||
|
NEW_CLASSES = ["broken streetlight", "drainage"]
|
||||||
|
|
||||||
|
for cls in NEW_CLASSES:
|
||||||
|
src_class_dir = SRC_DIR / cls
|
||||||
|
if not src_class_dir.exists():
|
||||||
|
print(f"⚠️ Source folder not found: {src_class_dir}")
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Prepare destination folders
|
||||||
|
train_dest = DST_DIR / "train" / cls
|
||||||
|
val_dest = DST_DIR / "val" / cls
|
||||||
|
train_dest.mkdir(parents=True, exist_ok=True)
|
||||||
|
val_dest.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
# List all images
|
||||||
|
images = list(src_class_dir.glob("*.*")) # jpg, png, jpeg
|
||||||
|
random.shuffle(images)
|
||||||
|
|
||||||
|
# Split
|
||||||
|
split_idx = int(len(images) * TRAIN_SPLIT)
|
||||||
|
train_imgs = images[:split_idx]
|
||||||
|
val_imgs = images[split_idx:]
|
||||||
|
|
||||||
|
# Move images
|
||||||
|
for img in train_imgs:
|
||||||
|
shutil.move(str(img), train_dest / img.name)
|
||||||
|
for img in val_imgs:
|
||||||
|
shutil.move(str(img), val_dest / img.name)
|
||||||
|
|
||||||
|
print(f"✅ Class '{cls}' added: {len(train_imgs)} train, {len(val_imgs)} val")
|
||||||
|
|
||||||
|
print("All new classes are organized and ready for training!")
|
||||||
62
backend/test/Machine_Learning/street_light_scrapping.py
Normal file
@@ -0,0 +1,62 @@
|
|||||||
|
import os
import zipfile
import shutil
import random
from pathlib import Path
import requests

# ---------- CONFIG ----------
BASE_DIR = Path("dataset")
DOWNLOAD_DIR = Path("downloads")
CLASS_NAME = "streetlight"
TRAIN_SPLIT = 0.8  # 80% train, 20% val

os.makedirs(BASE_DIR / "train" / CLASS_NAME, exist_ok=True)
os.makedirs(BASE_DIR / "val" / CLASS_NAME, exist_ok=True)
os.makedirs(DOWNLOAD_DIR, exist_ok=True)

def download_from_github(url: str, out_path: Path):
    print(f"⬇️ Trying download: {url}")
    resp = requests.get(url, stream=True)
    if resp.status_code != 200:
        print(f"❌ Download failed: status code {resp.status_code}")
        return False
    with open(out_path, "wb") as f:
        for chunk in resp.iter_content(8192):
            f.write(chunk)
    print(f"✅ Downloaded to {out_path}")
    return True

def unzip_and_split(zip_path: Path, class_name: str):
    extract_path = Path("tmp_extract")
    with zipfile.ZipFile(zip_path, 'r') as zip_ref:
        zip_ref.extractall(extract_path)

    all_images = list(extract_path.rglob("*.jpg")) + list(extract_path.rglob("*.png")) + list(extract_path.rglob("*.jpeg"))
    if not all_images:
        print("⚠️ No images in extracted folder.")
        return

    random.shuffle(all_images)
    split_idx = int(len(all_images) * TRAIN_SPLIT)
    train = all_images[:split_idx]
    val = all_images[split_idx:]

    for img in train:
        shutil.move(str(img), BASE_DIR / "train" / class_name / img.name)
    for img in val:
        shutil.move(str(img), BASE_DIR / "val" / class_name / img.name)

    shutil.rmtree(extract_path)
    print(f"✅ {class_name} split: {len(train)} train / {len(val)} val")

if __name__ == "__main__":
    # Try the GitHub repo from the paper
    streetlight_url = "https://github.com/Team16Project/Street-Light-Dataset/archive/refs/heads/main.zip"
    zip_path = DOWNLOAD_DIR / "streetlight_dataset.zip"

    ok = download_from_github(streetlight_url, zip_path)
    if ok:
        unzip_and_split(zip_path, CLASS_NAME)
    else:
        print("⚠️ Could not download streetlight dataset. You may need to find an alternative source.")
40
backend/test/Machine_Learning/test_trained_ml.py
Normal file
@@ -0,0 +1,40 @@
import torch
from torchvision import transforms, models
from PIL import Image
import os

# ---------- CONFIG ----------
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
NUM_CLASSES = 6
CLASS_NAMES = ["broken_streetlight", "drainage", "garbage", "pothole", "signage", "streetlight"]
MODEL_PATH = "best_model.pth"
TEST_IMAGES_DIR = "images"  # folder containing test images

# ---------- MODEL ----------
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.load_state_dict(torch.load(MODEL_PATH, map_location=DEVICE))
model = model.to(DEVICE)
model.eval()

# ---------- IMAGE PREPROCESS ----------
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ---------- INFERENCE ----------
for image_name in os.listdir(TEST_IMAGES_DIR):
    image_path = os.path.join(TEST_IMAGES_DIR, image_name)
    if not image_path.lower().endswith(('.png', '.jpg', '.jpeg')):
        continue

    image = Image.open(image_path).convert("RGB")
    input_tensor = preprocess(image).unsqueeze(0).to(DEVICE)  # add batch dimension

    with torch.no_grad():
        outputs = model(input_tensor)
        _, predicted = torch.max(outputs, 1)
        predicted_class = CLASS_NAMES[predicted.item()]

    print(f"{image_name} --> Predicted class: {predicted_class}")
41
backend/test/Machine_Learning/tets_sevarity.py
Normal file
@@ -0,0 +1,41 @@
import cv2
from ultralytics import YOLO

# Load your trained YOLOv12 model
model = YOLO("checkpoints/pothole_detector/weights/best.pt")  # Path to your trained weights

# Define severity thresholds (you can adjust these based on your dataset)
def classify_severity(box, image_height):
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    if area > 50000 or y2 > image_height * 0.75:
        return "High"
    elif area > 20000 or y2 > image_height * 0.5:
        return "Medium"
    else:
        return "Low"

# Draw bounding boxes with severity
def draw_boxes_and_severity(image, results):
    for r in results:  # iterate over Results objects
        for i, box in enumerate(r.boxes.xyxy):  # xyxy format
            x1, y1, x2, y2 = map(int, box.cpu().numpy())
            conf = float(r.boxes.conf[i]) if hasattr(r.boxes, "conf") else 0.0  # confidence of this box
            severity = classify_severity((x1, y1, x2, y2), image.shape[0])
            color = (0, 255, 0) if severity == "Low" else (0, 255, 255) if severity == "Medium" else (0, 0, 255)
            cv2.rectangle(image, (x1, y1), (x2, y2), color, 2)
            cv2.putText(image, f"{severity} ({conf:.2f})", (x1, y1 - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return image

# Detect potholes in an image
def detect_potholes(image_path, output_path="output.jpg"):
    image = cv2.imread(image_path)
    results = model(image)  # Run inference
    image = draw_boxes_and_severity(image, results)
    cv2.imwrite(output_path, image)
    print(f"Output saved to {output_path}")

# Example usage
if __name__ == "__main__":
    detect_potholes(r"images\pothole_1.jpg")
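The thresholds in `classify_severity` are plain pixel-area heuristics plus a "lower in the frame" rule. A rough worked example with hypothetical boxes on a 720-pixel-tall image:

```python
# Hypothetical boxes, not from the dataset
print(classify_severity((100, 400, 350, 620), 720))  # area = 250*220 = 55000 > 50000 -> "High"
print(classify_severity((100, 100, 300, 250), 720))  # area = 200*150 = 30000 > 20000 -> "Medium"
print(classify_severity((10, 10, 110, 110), 720))    # area = 100*100 = 10000 -> "Low"
```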
17
backend/test/Machine_Learning/train_deetction.py
Normal file
@@ -0,0 +1,17 @@
from ultralytics import YOLO

def train():
    model = YOLO("yolov12n.pt")  # pretrained YOLOv12 nano
    model.train(
        data="D:/CTF_Hackathon/gensprintai2025/pothole-detection-yolov12.v2i.yolov12/data.yaml",
        epochs=10,
        imgsz=512,
        batch=8,
        device=0,
        project="checkpoints",
        name="pothole_detector",
        exist_ok=True
    )

if __name__ == "__main__":
    train()
125
backend/test/Machine_Learning/train_ml.py
Normal file
@@ -0,0 +1,125 @@
import os
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models
from torch.cuda.amp import GradScaler, autocast
from torch.utils.tensorboard import SummaryWriter
import time
import psutil

# ---------- CONFIG ----------
DATA_DIR = "dataset"  # dataset folder
BATCH_SIZE = 16
NUM_EPOCHS = 5
LR = 1e-4
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
NUM_CLASSES = 6  # broken_streetlight, drainage, garbage, pothole, signage, streetlight
NUM_WORKERS = 10  # Windows-safe

# ---------- DATA ----------
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])

val_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_dataset = datasets.ImageFolder(os.path.join(DATA_DIR, "train"), transform=train_transforms)
val_dataset = datasets.ImageFolder(os.path.join(DATA_DIR, "val"), transform=val_transforms)

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS)
val_loader = DataLoader(val_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=NUM_WORKERS)

# ---------- MODEL ----------
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(DEVICE)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=LR)
scaler = GradScaler()  # Mixed precision

# ---------- TENSORBOARD ----------
writer = SummaryWriter(log_dir="runs/streetlight_classification")

# ---------- DEBUG FUNCTIONS ----------
def print_gpu_memory():
    if DEVICE.type == "cuda":
        print(f"GPU Memory Allocated: {torch.cuda.memory_allocated()/1024**2:.2f} MB")
        print(f"GPU Memory Cached: {torch.cuda.memory_reserved()/1024**2:.2f} MB")

def print_cpu_memory():
    mem = psutil.virtual_memory()
    print(f"CPU Memory Usage: {mem.percent}% ({mem.used/1024**2:.2f}MB / {mem.total/1024**2:.2f}MB)")

# ---------- TRAINING FUNCTION ----------
def train_model(num_epochs):
    best_acc = 0.0
    for epoch in range(num_epochs):
        start_time = time.time()
        model.train()
        running_loss = 0.0

        for i, (inputs, labels) in enumerate(train_loader):
            inputs, labels = inputs.to(DEVICE), labels.to(DEVICE)
            optimizer.zero_grad()

            with autocast():
                outputs = model(inputs)
                loss = criterion(outputs, labels)

            scaler.scale(loss).backward()

            # Debug gradients for first batch
            if i == 0 and epoch == 0:
                for name, param in model.named_parameters():
                    if param.grad is not None:
                        print(f"Grad {name}: mean={param.grad.mean():.6f}, std={param.grad.std():.6f}")

            scaler.step(optimizer)
            scaler.update()
            running_loss += loss.item()

            if i % 10 == 0:
                print(f"[Epoch {epoch+1}][Batch {i}/{len(train_loader)}] Loss: {loss.item():.4f}")
                print_gpu_memory()
                print_cpu_memory()

        avg_loss = running_loss / len(train_loader)

        # ---------- VALIDATION ----------
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for inputs, labels in val_loader:
                inputs, labels = inputs.to(DEVICE), labels.to(DEVICE)
                outputs = model(inputs)
                _, preds = torch.max(outputs, 1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        val_acc = correct / total

        print(f"Epoch [{epoch+1}/{num_epochs}] completed in {time.time()-start_time:.2f}s")
        print(f"Train Loss: {avg_loss:.4f}, Val Accuracy: {val_acc:.4f}\n")

        # TensorBoard logging
        writer.add_scalar("Loss/train", avg_loss, epoch)
        writer.add_scalar("Accuracy/val", val_acc, epoch)

        # Save best model
        if val_acc > best_acc:
            best_acc = val_acc
            torch.save(model.state_dict(), "best_model.pth")
            print("✅ Saved best model.")

    print(f"Training finished. Best Val Accuracy: {best_acc:.4f}")

if __name__ == "__main__":
    train_model(NUM_EPOCHS)
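One caveat before reusing `best_model.pth`: `ImageFolder` assigns label indices from the alphabetically sorted folder names, and `test_trained_ml.py` hard-codes `CLASS_NAMES` in what is assumed to be the same order. A quick check (sketch, assuming the `dataset/` layout above):

```python
from torchvision import datasets

train_ds = datasets.ImageFolder("dataset/train")
print(train_ds.classes)        # folder names in the label-index order the model was trained with
print(train_ds.class_to_idx)   # should line up with CLASS_NAMES in test_trained_ml.py
```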
19
backend/test/check_torch.py
Normal file
@@ -0,0 +1,19 @@
import torch

print("🔥 PyTorch version:", torch.__version__)

# Reports whether a CPU math backend (MKL or OpenMP) is available — normally true on any working install
print("✅ Torch is available:", torch.backends.mkl.is_available() or torch.backends.openmp.is_available())

# Check CUDA / GPU
print("🖥️ CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print(" -> CUDA device count:", torch.cuda.device_count())
    print(" -> Current device:", torch.cuda.current_device())
    print(" -> GPU name:", torch.cuda.get_device_name(torch.cuda.current_device()))
else:
    print(" -> Running on CPU only")

# Check MPS (for Apple Silicon M1/M2 Macs)
if torch.backends.mps.is_available():
    print("🍎 MPS (Apple GPU) available")
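If you want a single device handle that also covers Apple Silicon, a small sketch built on the same checks:

```python
import torch

# Prefer CUDA, then MPS, then CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
print("Using device:", device)
```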
130
backend/test/test_backend.py
Normal file
@@ -0,0 +1,130 @@
import requests
import json
import uuid
from pathlib import Path

BASE_URL = "http://127.0.0.1:8000/api"  # API root

# ----------------------
# Helper function to log responses nicely
# ----------------------
def log_response(step_name, response):
    print(f"\n=== {step_name} ===")
    print("Status Code:", response.status_code)
    try:
        print("Response JSON:", json.dumps(response.json(), indent=2))
    except Exception:
        print("Response Text:", response.text)

# ----------------------
# 1. Create a new user
# ----------------------
def create_user(name, email):
    url = f"{BASE_URL}/users"
    payload = {"name": name, "email": email}
    response = requests.post(url, json=payload)
    log_response("CREATE USER", response)
    if response.status_code == 200:
        user_data = response.json()
        return user_data.get("id"), user_data.get("name")
    return None, None

# ----------------------
# 2. Submit a new report/ticket
# ----------------------
def submit_report(user_id, image_path):
    url = f"{BASE_URL}/report"
    if not Path(image_path).exists():
        print(f"Image file not found: {image_path}")
        return None

    data = {
        "user_id": user_id,
        "latitude": 3.12345,
        "longitude": 101.54321,
        "description": "Automated test report"
    }
    with open(image_path, "rb") as img_file:
        files = {"image": img_file}
        response = requests.post(url, data=data, files=files)
    log_response("SUBMIT REPORT", response)
    if response.status_code == 201:
        return response.json().get("ticket_id")
    return None

# ----------------------
# 3. Fetch all tickets
# ----------------------
def get_all_tickets():
    url = f"{BASE_URL}/tickets"
    response = requests.get(url)
    log_response("GET ALL TICKETS", response)

# ----------------------
# 4. Fetch a single ticket
# ----------------------
def get_ticket(ticket_id):
    url = f"{BASE_URL}/tickets/{ticket_id}"
    response = requests.get(url)
    log_response(f"GET TICKET {ticket_id}", response)

# ----------------------
# 5. Update ticket status
# ----------------------
def update_ticket(ticket_id, new_status):
    url = f"{BASE_URL}/tickets/{ticket_id}"
    payload = {"new_status": new_status}  # <-- use new_status to match backend
    response = requests.patch(url, json=payload)
    log_response(f"UPDATE TICKET {ticket_id} TO {new_status}", response)


# ----------------------
# 6. Fetch analytics
# ----------------------
def get_analytics():
    url = f"{BASE_URL}/analytics"
    response = requests.get(url)
    log_response("GET ANALYTICS", response)

# ----------------------
# Main test flow
# ----------------------
if __name__ == "__main__":
    print("=== STARTING BACKEND TEST SCRIPT ===")

    # # Step 1: Create user
    # user_name = "Test User"
    # user_email = f"testuser1@gmail.com"
    # user_id, returned_name = create_user(user_name, user_email)
    # if user_id:
    #     print(f"Created user: {returned_name} with ID: {user_id}")
    # else:
    #     print("Failed to create user, aborting script.")
    #     exit(1)

    user_id = "5fc2ac8b-6d77-4567-918e-39e31f749c79"  # Use existing user ID for testing

    # Step 2: Submit a ticket
    image_file = r"D:\CTF_Hackathon\gensprintai2025\images\potholes.jpeg"  # Update this path
    ticket_id = submit_report(user_id, image_file)
    if ticket_id:
        print(f"Created ticket with ID: {ticket_id}")

        # # Step 3: Fetch all tickets
        # get_all_tickets()

        # Step 4: Fetch single ticket
        get_ticket(ticket_id)

        # Step 5: Update ticket status to 'In Progress' then 'Fixed'
        update_ticket(ticket_id, "In Progress")
        get_ticket(ticket_id)
        update_ticket(ticket_id, "Fixed")

        # Step 6: Fetch analytics
        get_analytics()
    else:
        print("Ticket creation failed, skipping ticket tests.")

    print("\n=== BACKEND TEST SCRIPT COMPLETED ===")
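Note that this script sends `new_status` in the JSON body, while the dashboard and the Flutter client below pass it as a query parameter. If the body form is rejected by the backend, the query-parameter variant (assumed here to match the other clients in this commit) looks like:

```python
import requests

ticket_id = "<ticket-id>"  # placeholder, use a real ticket ID
resp = requests.patch(
    f"http://127.0.0.1:8000/api/tickets/{ticket_id}",
    params={"new_status": "In Progress"},  # sent as ?new_status=...
)
print(resp.status_code, resp.text)
```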
33
backend/test_ai_service.py
Normal file
@@ -0,0 +1,33 @@
import os
from app.services.global_ai import get_ai_service

# Initialize AI service
ai_service = get_ai_service()

if ai_service is None:
    print("AI Service failed to initialize.")
    exit(1)

# ----------------------
# Test classification
# ----------------------
test_image = r"D:\CTF_Hackathon\gensprintai2025\images\dtreet_light_1.jpg"

if not os.path.exists(test_image):
    print(f"Test image not found at {test_image}")
    exit(1)

try:
    category = ai_service.classify_category(test_image)
    print(f"Classification result: {category}")
except Exception as e:
    print(f"Classification failed: {e}")

# ----------------------
# Test detection / severity
# ----------------------
try:
    severity, output_path = ai_service.detect_pothole_severity(test_image, "tests/output.jpg")
    print(f"Detection result: Severity={severity}, Output saved to {output_path}")
except Exception as e:
    print(f"Detection failed: {e}")
37
backend/test_server.py
Normal file
@@ -0,0 +1,37 @@
#!/usr/bin/env python3

import sys
import os

# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

try:
    print("Testing imports...")
    from app.database import Base, engine
    print("✓ Database imports successful")

    from app.models.ticket_model import User, Ticket, TicketStatus, SeverityLevel
    print("✓ Model imports successful")

    from app.services.ticket_service import TicketService
    print("✓ Service imports successful")

    from app.services.global_ai import init_ai_service
    print("✓ AI service imports successful")

    print("\nTesting database connection...")
    Base.metadata.create_all(bind=engine)
    print("✓ Database initialized successfully")

    print("\nTesting AI service initialization...")
    ai_service = init_ai_service()
    print("✓ AI service initialized successfully")

    print("\n✅ All tests passed! The backend should work correctly.")

except Exception as e:
    print(f"❌ Error: {e}")
    import traceback
    traceback.print_exc()
    sys.exit(1)
127
dashboard/app.js
@@ -11,6 +11,32 @@ const STATUS_COLOR = { submitted:'#1976D2', in_progress:'#7B1FA2', fixed:'#455A6

 function fetchJSON(path){ return fetch(path).then(r=>r.json()); }

+// Normalize API data to expected format
+function normalizeReportData(report) {
+  // If it's already in the expected format (from demo data), return as is
+  if (report.location && report.location.lat !== undefined) {
+    return report;
+  }
+
+  // Convert API format to expected format
+  return {
+    id: report.ticket_id,
+    category: report.category || 'other',
+    severity: report.severity || 'low',
+    status: report.status || 'submitted',
+    notes: report.description || '',
+    location: {
+      lat: report.latitude,
+      lng: report.longitude
+    },
+    createdAt: report.created_at,
+    updatedAt: report.updated_at,
+    // Add missing fields with defaults
+    userId: report.user_id,
+    imagePath: report.image_path
+  };
+}
+
 function useI18n(initialLang='en'){
   const [lang,setLang] = useState(localStorage.getItem('lang') || initialLang);
   const [map,setMap] = useState({en:null,ms:null});
@@ -61,10 +87,32 @@ function App(){
   const [heatEnabled,setHeatEnabled] = useState(false);

   useEffect(()=>{
-    fetchJSON('./data/demo-reports.json').then(data=>{
-      setRawData(data);
-      setLoading(false);
-    }).catch(err=>{ console.error(err); setLoading(false); });
+    // Try to fetch from backend API first, fallback to demo data
+    fetch('http://127.0.0.1:8000/api/tickets')
+      .then(r => r.ok ? r.json() : Promise.reject('API not available'))
+      .then(data => {
+        console.log('Loaded data from API:', data.length, 'reports');
+        const normalizedData = data.map(normalizeReportData);
+        setRawData(normalizedData);
+        setLoading(false);
+      })
+      .catch(err => {
+        console.log('API not available, using demo data:', err);
+        return fetchJSON('./data/demo-reports.json');
+      })
+      .then(data => {
+        if (data) {
+          console.log('Loaded demo data:', data.length, 'reports');
+          // Demo data is already in the correct format, but normalize just in case
+          const normalizedData = data.map(normalizeReportData);
+          setRawData(normalizedData);
+        }
+        setLoading(false);
+      })
+      .catch(err => {
+        console.error('Error loading data:', err);
+        setLoading(false);
+      });
   },[]);

   useEffect(()=>{
@@ -206,21 +254,64 @@
   });
   },[filtered]);

-  const cycleStatus = (reportId)=>{
-    setRawData(prev=>{
-      const out = prev.map(r=>{
-        if(r.id !== reportId) return r;
-        const idx = STATUSES.indexOf(r.status);
-        const ni = (idx + 1) % STATUSES.length;
-        return {...r, status: STATUSES[ni], updatedAt: new Date().toISOString() };
-      });
-      // if the currently selected item was updated, update the selected state too
-      if(selected && selected.id === reportId){
-        const newSel = out.find(r=>r.id === reportId);
-        setSelected(newSel || null);
-      }
-      return out;
-    });
+  const cycleStatus = async (reportId)=>{
+    try {
+      // Find the current report to get its status
+      const currentReport = rawData.find(r => r.id === reportId);
+      if (!currentReport) return;
+
+      const idx = STATUSES.indexOf(currentReport.status);
+      const nextStatus = STATUSES[(idx + 1) % STATUSES.length];
+
+      // Try to update via API first
+      const success = await fetch(`http://127.0.0.1:8000/api/tickets/${reportId}?new_status=${encodeURIComponent(nextStatus)}`, {
+        method: 'PATCH'
+      }).then(r => r.ok);
+
+      if (success) {
+        // If API update successful, refresh data from API
+        const response = await fetch('http://127.0.0.1:8000/api/tickets');
+        if (response.ok) {
+          const data = await response.json();
+          const normalizedData = data.map(normalizeReportData);
+          setRawData(normalizedData);
+
+          // Update selected item
+          const updatedReport = normalizedData.find(r => r.id === reportId);
+          setSelected(updatedReport || null);
+        }
+      } else {
+        console.error('Failed to update status via API');
+        // Fallback to local update
+        setRawData(prev=>{
+          const out = prev.map(r=>{
+            if(r.id !== reportId) return r;
+            return {...r, status: nextStatus, updatedAt: new Date().toISOString() };
+          });
+          if(selected && selected.id === reportId){
+            const newSel = out.find(r=>r.id === reportId);
+            setSelected(newSel || null);
+          }
+          return out;
+        });
+      }
+    } catch (error) {
+      console.error('Error updating status:', error);
+      // Fallback to local update
+      setRawData(prev=>{
+        const out = prev.map(r=>{
+          if(r.id !== reportId) return r;
+          const idx = STATUSES.indexOf(r.status);
+          const ni = (idx + 1) % STATUSES.length;
+          return {...r, status: STATUSES[ni], updatedAt: new Date().toISOString() };
+        });
+        if(selected && selected.id === reportId){
+          const newSel = out.find(r=>r.id === reportId);
+          setSelected(newSel || null);
+        }
+        return out;
+      });
+    }
   };

   const openInMaps = (r)=>{
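For reference, the field mapping that `normalizeReportData` performs, written out as a Python sketch with a made-up ticket (field names taken from the function above):

```python
# Hypothetical API ticket, shaped like the backend's /api/tickets response
api_ticket = {
    "ticket_id": "abc123", "category": "pothole", "severity": "High", "status": "New",
    "description": "Large pothole", "latitude": 3.12345, "longitude": 101.54321,
    "created_at": "2025-01-01T00:00:00", "updated_at": "2025-01-01T00:00:00",
    "user_id": "u1", "image_path": "static/uploads/abc123.jpg",
}

# Same mapping normalizeReportData applies before handing data to the dashboard
dashboard_report = {
    "id": api_ticket["ticket_id"],
    "category": api_ticket.get("category") or "other",
    "severity": api_ticket.get("severity") or "low",
    "status": api_ticket.get("status") or "submitted",
    "notes": api_ticket.get("description") or "",
    "location": {"lat": api_ticket["latitude"], "lng": api_ticket["longitude"]},
    "createdAt": api_ticket["created_at"],
    "updatedAt": api_ticket["updated_at"],
    "userId": api_ticket["user_id"],
    "imagePath": api_ticket["image_path"],
}
print(dashboard_report)
```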
217
lib/services/api_service.dart
Normal file
@@ -0,0 +1,217 @@
import 'dart:convert';
import 'dart:io';
import 'package:flutter/foundation.dart';
import 'package:http/http.dart' as http;
import 'package:uuid/uuid.dart';
import '../models/report.dart';

/// Service for communicating with the FixMate Backend API
class ApiService {
  // Configure this to match your backend URL
  static const String _baseUrl = 'http://127.0.0.1:8000/api';
  static const String _uploadsUrl = 'http://127.0.0.1:8000/static/uploads';

  // Create a user ID for this device if not exists
  static Future<String> _getOrCreateUserId() async {
    // For now, generate a UUID for this device
    // In a real app, this would be stored securely
    return const Uuid().v4();
  }

  /// Create a new user
  static Future<String> createUser({required String name, required String email}) async {
    try {
      final response = await http.post(
        Uri.parse('$_baseUrl/users'),
        headers: {'Content-Type': 'application/json'},
        body: json.encode({
          'name': name,
          'email': email,
        }),
      );

      if (response.statusCode == 200) {
        final data = json.decode(response.body);
        return data['id'] as String;
      } else {
        throw Exception('Failed to create user: ${response.body}');
      }
    } catch (e) {
      print('Error creating user: $e');
      rethrow;
    }
  }

  /// Submit a report to the backend
  static Future<String> submitReport({
    required double latitude,
    required double longitude,
    required String description,
    required List<int> imageBytes,
    required String imageName,
  }) async {
    try {
      final userId = await _getOrCreateUserId();

      var request = http.MultipartRequest('POST', Uri.parse('$_baseUrl/report'));
      request.fields['user_id'] = userId;
      request.fields['latitude'] = latitude.toString();
      request.fields['longitude'] = longitude.toString();
      request.fields['description'] = description;

      // Add the image file
      request.files.add(
        http.MultipartFile.fromBytes(
          'image',
          imageBytes,
          filename: imageName,
        ),
      );

      final response = await request.send();

      if (response.statusCode == 201) {
        final responseBody = await response.stream.bytesToString();
        final data = json.decode(responseBody);
        return data['ticket_id'] as String;
      } else {
        final responseBody = await response.stream.bytesToString();
        throw Exception('Failed to submit report: $responseBody');
      }
    } catch (e) {
      print('Error submitting report: $e');
      rethrow;
    }
  }

  /// Get all tickets from the backend
  static Future<List<Report>> getReports() async {
    try {
      final response = await http.get(Uri.parse('$_baseUrl/tickets'));

      if (response.statusCode == 200) {
        final List<dynamic> data = json.decode(response.body);
        return data.map((json) => _convertApiTicketToReport(json)).toList();
      } else {
        throw Exception('Failed to get reports: ${response.body}');
      }
    } catch (e) {
      print('Error getting reports: $e');
      // Return empty list if API is not available (fallback to local storage)
      return [];
    }
  }

  /// Get a single ticket by ID
  static Future<Report?> getReportById(String ticketId) async {
    try {
      final response = await http.get(Uri.parse('$_baseUrl/tickets/$ticketId'));

      if (response.statusCode == 200) {
        final data = json.decode(response.body);
        return _convertApiTicketToReport(data);
      } else {
        throw Exception('Failed to get report: ${response.body}');
      }
    } catch (e) {
      print('Error getting report: $e');
      return null;
    }
  }

  /// Update ticket status
  static Future<bool> updateReportStatus(String ticketId, String status) async {
    try {
      final response = await http.patch(
        Uri.parse('$_baseUrl/tickets/$ticketId?new_status=$status'),
      );

      return response.statusCode == 200;
    } catch (e) {
      print('Error updating report status: $e');
      return false;
    }
  }

  /// Get analytics data
  static Future<Map<String, dynamic>> getAnalytics() async {
    try {
      final response = await http.get(Uri.parse('$_baseUrl/analytics'));

      if (response.statusCode == 200) {
        return json.decode(response.body) as Map<String, dynamic>;
      } else {
        throw Exception('Failed to get analytics: ${response.body}');
      }
    } catch (e) {
      print('Error getting analytics: $e');
      return {};
    }
  }

  /// Convert API ticket response to Report model
  static Report _convertApiTicketToReport(Map<String, dynamic> data) {
    return Report(
      id: data['ticket_id'] ?? '',
      category: _normalizeCategory(data['category'] ?? ''),
      severity: _normalizeSeverity(data['severity'] ?? 'N/A'),
      status: _normalizeStatus(data['status'] ?? 'New'),
      description: data['description'] ?? '',
      latitude: data['latitude']?.toDouble() ?? 0.0,
      longitude: data['longitude']?.toDouble() ?? 0.0,
      createdAt: DateTime.parse(data['created_at'] ?? DateTime.now().toIso8601String()),
      updatedAt: DateTime.parse(data['updated_at'] ?? DateTime.now().toIso8601String()),
      // Image path will be constructed from the API response
      imagePath: data['image_path'] != null ? '$_uploadsUrl/${data['image_path'].split('/').last}' : null,
    );
  }

  /// Normalize category names to match the app's expected format
  static String _normalizeCategory(String category) {
    // Convert API categories to app categories
    switch (category.toLowerCase()) {
      case 'pothole':
        return 'pothole';
      case 'streetlight':
      case 'broken_streetlight':
        return 'streetlight';
      case 'garbage':
        return 'trash';
      case 'signage':
        return 'signage';
      case 'drainage':
        return 'drainage';
      default:
        return 'other';
    }
  }

  /// Normalize severity levels
  static String _normalizeSeverity(String severity) {
    switch (severity.toLowerCase()) {
      case 'high':
        return 'high';
      case 'medium':
        return 'medium';
      case 'low':
        return 'low';
      default:
        return 'low'; // Default to low if unknown
    }
  }

  /// Normalize status values
  static String _normalizeStatus(String status) {
    switch (status.toLowerCase()) {
      case 'new':
        return 'submitted';
      case 'in progress':
      case 'in_progress':
        return 'in_progress';
      case 'fixed':
        return 'fixed';
      default:
        return 'submitted';
    }
  }
}
@@ -1,16 +1,29 @@
 import 'dart:convert';
 import 'dart:io';
+import 'dart:typed_data';
 import 'package:flutter/foundation.dart';
 import 'package:shared_preferences/shared_preferences.dart';
 import 'package:path_provider/path_provider.dart';
 import '../models/report.dart';
+import 'api_service.dart';

 /// Service for persisting reports and managing local storage
 class StorageService {
   static const String _reportsKey = 'reports_v1';

-  /// Get all reports from storage
+  /// Get all reports from storage (API first, fallback to local)
   static Future<List<Report>> getReports() async {
+    try {
+      // Try API first
+      final apiReports = await ApiService.getReports();
+      if (apiReports.isNotEmpty) {
+        return apiReports;
+      }
+    } catch (e) {
+      print('API not available, falling back to local storage: $e');
+    }
+
+    // Fallback to local storage
     try {
       final prefs = await SharedPreferences.getInstance();
       final reportsJson = prefs.getString(_reportsKey);
@@ -27,8 +40,31 @@ class StorageService {
     }
   }

-  /// Save a single report to storage
+  /// Save a single report to storage (API first, fallback to local)
   static Future<bool> saveReport(Report report) async {
+    try {
+      // Try API first - convert Report to API format
+      final imageBytes = report.photoPath != null
+          ? await _getImageBytes(report)
+          : report.base64Photo != null
+              ? base64.decode(report.base64Photo!)
+              : null;
+
+      if (imageBytes != null) {
+        await ApiService.submitReport(
+          latitude: report.location.lat,
+          longitude: report.location.lng,
+          description: report.notes ?? '',
+          imageBytes: imageBytes,
+          imageName: '${report.id}.jpg',
+        );
+        return true;
+      }
+    } catch (e) {
+      print('API not available, falling back to local storage: $e');
+    }
+
+    // Fallback to local storage
     try {
       final reports = await getReports();
       final existingIndex = reports.indexWhere((r) => r.id == report.id);
@@ -46,8 +82,21 @@ class StorageService {
     }
   }

-  /// Delete a report from storage
+  /// Delete a report from storage (API first, fallback to local)
   static Future<bool> deleteReport(String reportId) async {
+    try {
+      // Try API first (note: API doesn't have delete endpoint, so this will always fallback)
+      final apiReport = await ApiService.getReportById(reportId);
+      if (apiReport != null) {
+        // For now, the API doesn't have a delete endpoint, so we can't delete from API
+        // This would need to be added to the backend
+        print('API delete not available, keeping local copy');
+      }
+    } catch (e) {
+      print('API not available: $e');
+    }
+
+    // Fallback to local storage
     try {
       final reports = await getReports();
       final updatedReports = reports.where((r) => r.id != reportId).toList();
@@ -66,9 +115,10 @@ class StorageService {
     }
   }

-  /// Clear all reports from storage
+  /// Clear all reports from storage (local only, API doesn't have clear endpoint)
   static Future<bool> clearAllReports() async {
     try {
+      // Note: API doesn't have a clear all endpoint, so we only clear local storage
       final prefs = await SharedPreferences.getInstance();
       await prefs.remove(_reportsKey);

@@ -177,6 +227,30 @@ class StorageService {
     }
   }

+  /// Get image bytes for API submission
+  static Future<Uint8List?> _getImageBytes(Report report) async {
+    if (report.photoPath != null) {
+      try {
+        final file = File(report.photoPath!);
+        if (await file.exists()) {
+          return await file.readAsBytes();
+        }
+      } catch (e) {
+        print('Error reading image file: $e');
+      }
+    }
+
+    if (report.base64Photo != null) {
+      try {
+        return base64.decode(report.base64Photo!);
+      } catch (e) {
+        print('Error decoding base64 image: $e');
+      }
+    }
+
+    return null;
+  }
+
   /// Get storage statistics
   static Future<StorageStats> getStorageStats() async {
     try {
@@ -43,6 +43,7 @@ dependencies:
   uuid: ^4.5.1
   provider: ^6.1.1
   url_launcher: ^6.3.0
+  http: ^1.2.2

 dev_dependencies:
   flutter_test: