Takazudo Modular Docs


Product Markdown Sync Architecture and Implementation Plan

System Overview

Product Markdown Sync will be a Node.js-based sub-package located at /sub-packages/product-md-sync/ that performs bidirectional conversion between Mercari CSV exports (162 columns) and Markdown files for managing existing product descriptions. It enables easy editing of product information outside of the CSV format, then syncs changes back to the original CSV for re-upload to Mercari Shops.

Key Implementation Points:

  • Location: /sub-packages/product-md-sync/
  • Commands: Available from root directory via pnpm mdsync:csv-to-md and pnpm mdsync:md-to-csv
  • Input: 162-column CSV exports from /mercari-data/
  • Output: Updated CSV files to /mercari-data/updated/
  • Markdown files: /sub-packages/zmdpreview/docs/products/
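The root-level commands above imply pnpm script wiring in the repository root. A minimal sketch of what those scripts could look like (the `--filter` target name and inner script names are assumptions, not confirmed by this document):

```json
{
  "scripts": {
    "mdsync:csv-to-md": "pnpm --filter product-md-sync run csv2md",
    "mdsync:md-to-csv": "pnpm --filter product-md-sync run md2csv"
  }
}
```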

Architecture Overview

Data Flow Diagram

graph TB
    subgraph "Input Sources"
        CSV162[Mercari CSV<br/>162 columns]
        MD[Markdown Files<br/>docs/products/*.md]
    end
    
    subgraph "Core Processing"
        Parser[CSV Parser<br/>Papa Parse]
        MDParser[Markdown Parser<br/>gray-matter + marked]
        Validator[Data Validator]
        Transformer[Field Transformer]
        Mapper[Column Mapper]
    end
    
    subgraph "Output Targets"
        CSV162Updated[Updated Mercari CSV<br/>162 columns]
        MDFiles[Markdown Files<br/>UTF-8 without BOM]
    end
    
    CSV162 --> Parser
    Parser --> Validator
    Validator --> Transformer
    Transformer --> MDFiles
    
    MD --> MDParser
    MDParser --> Mapper
    Mapper --> CSV162Updated

System Architecture

graph LR
    subgraph "CLI Layer"
        CLI[Commander.js CLI]
        Commands[Commands<br/>csv2md / md2csv]
    end
    
    subgraph "Service Layer"
        CSV2MD[CSV to Markdown<br/>Converter]
        MD2CSV[Markdown to CSV<br/>Converter]
        Validator[Validation Service]
    end
    
    subgraph "Data Layer"
        CSVParser[CSV Parser]
        MDProcessor[Markdown Processor]
        FileSystem[File System Handler]
    end
    
    subgraph "Utility Layer"
        Encoder[Encoding Handler]
        Logger[Logger]
        ErrorHandler[Error Handler]
    end
    
    CLI --> Commands
    Commands --> CSV2MD
    Commands --> MD2CSV
    CSV2MD --> CSVParser
    CSV2MD --> MDProcessor
    MD2CSV --> MDProcessor
    MD2CSV --> CSVParser
    CSV2MD --> Validator
    MD2CSV --> Validator
    CSVParser --> FileSystem
    MDProcessor --> FileSystem
    FileSystem --> Encoder

File Structure

sub-packages/product-md-sync/         # Sub-package location
├── package.json
├── README.md
├── src/
│   ├── index.js                 # CLI entry point
│   ├── commands/
│   │   ├── csv2md.js            # CSV to Markdown conversion command
│   │   └── md2csv.js            # Markdown to CSV conversion command
│   ├── converters/
│   │   ├── csvToMarkdown.js    # CSV to Markdown conversion logic
│   │   └── markdownToCsv.js    # Markdown to CSV conversion logic
│   ├── parsers/
│   │   ├── csvParser.js        # CSV parsing logic
│   │   └── markdownParser.js   # Markdown parsing logic
│   ├── validators/
│   │   ├── csvValidator.js     # CSV validation
│   │   └── markdownValidator.js # Markdown validation
│   ├── mappers/
│   │   ├── fieldMapper.js      # Field mapping
│   │   └── columnMapper.js     # Column mapping definitions
│   ├── utils/
│   │   ├── encoding.js         # Encoding handling
│   │   ├── fileHandler.js      # File operations
│   │   ├── logger.js           # Logging utility
│   │   └── errorHandler.js     # Error handling
│   └── constants/
│       ├── columns.js           # Column definitions
│       └── mappings.js          # Mapping definitions
├── bin/
│   └── cli.js                  # NPM script entry point
├── config/
│   └── default.js              # Default configuration
├── tests/
│   ├── unit/
│   ├── integration/
│   └── fixtures/
└── docs/
    └── mappings.md             # Field mapping specifications

# Data directories (outside sub-package)
/mercari-data/
├── product_data_*.csv          # Input CSV files (162 columns, pattern: product_data_YYYY-MM-DD.csv)
│   # Multiple CSV files may exist but only the latest should be used
│   # Old files are kept for backup purposes
└── updated/                    # Output directory for updated CSVs

/sub-packages/zmdpreview/docs/products/  # Markdown files location

Core Module Design

1. CSV Parser Module

// src/parsers/csvParser.js
import Papa from 'papaparse';
import fs from 'fs';
import path from 'path';
import { removeBOM } from '../utils/encoding.js';
import { ParseError, ValidationError } from '../utils/errorHandler.js';

export class CSVParser {
  constructor(options = {}) {
    this.options = {
      delimiter: ',',
      header: true,
      skipEmptyLines: true,
      encoding: 'utf8',
      ...options
    };
  }

  /**
   * Find the latest CSV file in the mercari-data directory
   * Files must follow pattern: product_data_YYYY-MM-DD.csv
   */
  async findLatestCSVFile(directory) {
    const files = await fs.promises.readdir(directory);
    
    // Filter CSV files matching the pattern
    const csvFiles = files.filter(file => {
      return file.startsWith('product_data_') && file.endsWith('.csv');
    });
    
    if (csvFiles.length === 0) {
      throw new Error('No CSV files found matching pattern: product_data_YYYY-MM-DD.csv');
    }
    
    // Parse dates and find the latest
    const filesWithDates = csvFiles.map(file => {
      const match = file.match(/product_data_(\d{4})-(\d{2})-(\d{2})\.csv$/);
      if (!match) {
        return null;
      }
      
      const [, year, month, day] = match;
      const date = new Date(`${year}-${month}-${day}`);
      
      // Validate date
      if (isNaN(date.getTime())) {
        console.warn(`Invalid date in filename: ${file}`);
        return null;
      }
      
      return {
        filename: file,
        date: date,
        path: path.join(directory, file)
      };
    }).filter(item => item !== null);
    
    if (filesWithDates.length === 0) {
      throw new Error('No valid CSV files found with date pattern YYYY-MM-DD');
    }
    
    // Sort by date descending and get the latest
    filesWithDates.sort((a, b) => b.date - a.date);
    const latest = filesWithDates[0];
    
    console.log(`Found latest CSV file: ${latest.filename} (${latest.date.toISOString().split('T')[0]})`);
    
    return latest.path;
  }

  /**
   * Parse the 162-column Mercari CSV
   * If filePath is a directory, finds the latest CSV file automatically
   */
  async parse162(filePath) {
    // Check if filePath is a directory
    const stats = await fs.promises.stat(filePath);
    let targetFile = filePath;
    
    if (stats.isDirectory()) {
      targetFile = await this.findLatestCSVFile(filePath);
      console.log(`Using latest CSV file: ${path.basename(targetFile)}`);
    }
    
    const content = await fs.promises.readFile(targetFile, 'utf8');
    const cleanContent = removeBOM(content);
    
    const result = Papa.parse(cleanContent, {
      ...this.options,
      transformHeader: (header, index) => {
        // Normalize headers (preserve column indices)
        return this.normalizeHeader(header, index);
      },
      transform: (value, header) => {
        // Value transformation processing
        return this.transformValue(value, header);
      }
    });

    if (result.errors.length > 0) {
      throw new ParseError('CSV parsing failed', result.errors);
    }

    // Validate 162 columns
    this.validateColumnCount(result.meta.fields, 162);
    
    return {
      data: result.data,
      meta: result.meta,
      headers: result.meta.fields,
      sourceFile: targetFile
    };
  }

  /**
   * Generate updated 162-column CSV
   */
  generateUpdated162(data, outputPath) {
    const csv = Papa.unparse(data, {
      delimiter: ',',
      header: true,
      columns: data[0] ? Object.keys(data[0]) : [], // Preserve all 162 columns
      quotes: true,
      quoteChar: '"',
      escapeChar: '"',
      newline: '\r\n'
    });

    // Save with UTF-8 with BOM
    const BOM = '\ufeff';
    return fs.promises.writeFile(outputPath, BOM + csv, 'utf8');
  }

  validateColumnCount(headers, expectedCount) {
    if (headers.length !== expectedCount) {
      throw new ValidationError(
        `Expected ${expectedCount} columns, got ${headers.length}`
      );
    }
  }

  // Minimal defaults for the hooks used in parse162; extend as needed
  normalizeHeader(header, index) {
    return header.trim();
  }

  transformValue(value, header) {
    return typeof value === 'string' ? value.trim() : value;
  }
}
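The latest-file selection in `findLatestCSVFile` reduces to a pure function over filenames. A standalone sketch (`pickLatestCsv` is an illustrative name, not part of the module):

```javascript
// Pick the newest product_data_YYYY-MM-DD.csv from a list of filenames,
// or return null when no filename matches the pattern.
function pickLatestCsv(filenames) {
  const dated = filenames
    .map((name) => {
      const match = name.match(/^product_data_(\d{4})-(\d{2})-(\d{2})\.csv$/);
      if (!match) return null;
      const date = new Date(`${match[1]}-${match[2]}-${match[3]}`);
      return Number.isNaN(date.getTime()) ? null : { name, date };
    })
    .filter(Boolean)
    // Newest first
    .sort((a, b) => b.date - a.date);
  return dated.length > 0 ? dated[0].name : null;
}
```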

2. Markdown Processor Module

// src/parsers/markdownParser.js
import matter from 'gray-matter';
import { marked } from 'marked';
import fs from 'fs';

export class MarkdownProcessor {
  constructor() {
    this.setupMarkedOptions();
  }

  setupMarkedOptions() {
    marked.setOptions({
      gfm: true,
      breaks: false,
      pedantic: false
    });
  }

  /**
   * Parse a Markdown file and return a data object
   */
  async parseFile(filePath) {
    const content = await fs.promises.readFile(filePath, 'utf8');
    const { data: frontmatter, content: body } = matter(content);
    
    return {
      meta: frontmatter,
      content: body,
      sections: this.parseSections(body)
    };
  }

  /**
   * Generate a Markdown file from a data object
   */
  async generateFile(data, outputPath) {
    const frontmatter = this.buildFrontmatter(data);
    const content = this.buildContent(data);
    
    const markdown = matter.stringify(content, frontmatter);
    
    // Save with UTF-8 without BOM
    await fs.promises.writeFile(outputPath, markdown, 'utf8');
  }

  /**
   * Parse Markdown sections
   */
  parseSections(content) {
    const sections = {};
    const lines = content.split('\n');
    let currentSection = null;
    let sectionContent = [];

    for (const line of lines) {
      if (line.startsWith('## ')) {
        if (currentSection) {
          sections[currentSection] = sectionContent.join('\n').trim();
        }
        currentSection = line.substring(3).trim();
        sectionContent = [];
      } else if (currentSection) {
        sectionContent.push(line);
      }
    }

    if (currentSection) {
      sections[currentSection] = sectionContent.join('\n').trim();
    }

    return sections;
  }

  buildFrontmatter(data) {
    return {
      productId: data['商品ID'],
      productName: data['商品名'],
      price: parseInt(data['商品価格'], 10),
      status: this.mapStatus(data['公開ステータス']),
      lastUpdated: new Date().toISOString(),
      categories: this.parseCategories(data)
    };
  }

  buildContent(data) {
    const sections = [];
    
    // Product description section
    sections.push('## 商品説明\n');
    sections.push(data['商品説明'] || '');
    
    // Product details section
    if (data['商品の詳細']) {
      sections.push('\n## 商品詳細\n');
      sections.push(this.formatDetails(data));
    }
    
    // Shipping information section
    sections.push('\n## 配送情報\n');
    sections.push(this.formatShipping(data));
    
    return sections.join('\n');
  }

  // mapStatus, parseCategories, formatDetails, and formatShipping are
  // class helpers omitted here for brevity.
}
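The `## `-based section splitting above can be exercised in isolation. A dependency-free sketch showing what `parseSections` produces for a small input:

```javascript
// Split a Markdown body into { sectionTitle: sectionBody } using "## " headings,
// mirroring MarkdownProcessor.parseSections.
function parseSections(content) {
  const sections = {};
  let current = null;
  let buffer = [];
  for (const line of content.split('\n')) {
    if (line.startsWith('## ')) {
      if (current) sections[current] = buffer.join('\n').trim();
      current = line.substring(3).trim();
      buffer = [];
    } else if (current) {
      buffer.push(line);
    }
  }
  if (current) sections[current] = buffer.join('\n').trim();
  return sections;
}

const sample = '## 商品説明\nGreat item.\n\n## 配送情報\nShips in 2 days.';
const sections = parseSections(sample);
```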

3. Field Mapper Module

// src/mappers/fieldMapper.js
export class FieldMapper {
  constructor() {
    this.mappings = this.loadMappings();
  }

  loadMappings() {
    return {
      // Basic information
      'product_id': { csv: '商品ID', markdown: 'productId' },
      'product_name': { csv: '商品名', markdown: 'productName' },
      'price': { csv: '商品価格', markdown: 'price', type: 'number' },
      
      // Status mapping (numeric code conversion)
      'status': {
        csv: '公開ステータス',
        markdown: 'status',
        transform: {
          toMarkdown: (value) => {
            const statusMap = {
              '1': 'public',
              '2': 'draft',
              '3': 'sold',
              '0': 'deleted'
            };
            return statusMap[value] || 'unknown';
          },
          toCsv: (value) => {
            const statusMap = {
              'public': '1',
              'draft': '2',
              'sold': '3',
              'deleted': '0'
            };
            return statusMap[value] || '2';
          }
        }
      },
      
      // Category mapping
      'category': {
        csv: ['カテゴリID', 'カテゴリ名'],
        markdown: 'categories',
        transform: {
          toMarkdown: (catId, catName) => {
            return {
              id: catId,
              name: catName,
              path: this.buildCategoryPath(catId, catName) // helper omitted for brevity
            };
          },
          toCsv: (categories) => {
            if (!categories || !categories[0]) return ['', ''];
            return [categories[0].id, categories[0].name];
          }
        }
      },
      
      // Image mapping (max 10 images)
      'images': {
        csv: Array.from({length: 10}, (_, i) => `商品画像URL${i + 1}`),
        markdown: 'images',
        transform: {
          toMarkdown: (urls) => urls.filter(url => url),
          toCsv: (images) => {
            const urls = Array(10).fill('');
            (images || []).forEach((url, i) => {
              if (i < 10) urls[i] = url;
            });
            return urls;
          }
        }
      }
    };
  }

  /**
   * Convert a CSV row into Markdown data
   */
  mapCsvToMarkdown(csvRow) {
    const markdownData = {};
    
    for (const [key, mapping] of Object.entries(this.mappings)) {
      if (Array.isArray(mapping.csv)) {
        // Multiple CSV columns to one Markdown field
        const values = mapping.csv.map(col => csvRow[col]);
        if (mapping.transform?.toMarkdown) {
          markdownData[mapping.markdown] = mapping.transform.toMarkdown(...values);
        } else {
          markdownData[mapping.markdown] = values;
        }
      } else {
        // One-to-one mapping
        const value = csvRow[mapping.csv];
        if (mapping.transform?.toMarkdown) {
          markdownData[mapping.markdown] = mapping.transform.toMarkdown(value);
        } else if (mapping.type === 'number') {
          markdownData[mapping.markdown] = parseInt(value, 10) || 0;
        } else {
          markdownData[mapping.markdown] = value;
        }
      }
    }
    
    return markdownData;
  }

  /**
   * Convert Markdown data into a CSV row. originalRow carries the source
   * CSV row so unmapped columns can be preserved.
   */
  mapMarkdownToCsv(markdownData, originalRow = {}) {
    const csvRow = {};
    
    for (const [key, mapping] of Object.entries(this.mappings)) {
      const value = markdownData[mapping.markdown];
      
      if (Array.isArray(mapping.csv)) {
        // One Markdown field to multiple CSV columns
        const csvValues = mapping.transform?.toCsv
          ? mapping.transform.toCsv(value)
          : Array(mapping.csv.length).fill('');
        
        mapping.csv.forEach((col, i) => {
          csvRow[col] = csvValues[i] || '';
        });
      } else {
        // One-to-one mapping
        if (mapping.transform?.toCsv) {
          csvRow[mapping.csv] = mapping.transform.toCsv(value);
        } else {
          csvRow[mapping.csv] = value?.toString() || '';
        }
      }
    }
    
    // Preserve all existing CSV fields
    this.preserveOriginalFields(csvRow, originalRow);
    
    return csvRow;
  }

  preserveOriginalFields(csvRow, originalRow) {
    // Copy all fields from original that weren't updated
    Object.keys(originalRow).forEach(key => {
      if (!(key in csvRow)) {
        csvRow[key] = originalRow[key];
      }
    });
  }
}
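The status transform pair above should round-trip cleanly between CSV codes and Markdown labels. A small standalone check (`statusToMarkdown` / `statusToCsv` are illustrative names mirroring the transform in the mapping table):

```javascript
// Status code <-> label mapping, as defined in the FieldMapper 'status' entry.
const toMarkdownMap = { '1': 'public', '2': 'draft', '3': 'sold', '0': 'deleted' };
const toCsvMap = { public: '1', draft: '2', sold: '3', deleted: '0' };

const statusToMarkdown = (code) => toMarkdownMap[code] ?? 'unknown';
const statusToCsv = (label) => toCsvMap[label] ?? '2'; // unknown labels fall back to draft

// Every valid CSV code should survive a CSV -> Markdown -> CSV round trip.
const roundTrips = ['0', '1', '2', '3'].every(
  (code) => statusToCsv(statusToMarkdown(code)) === code
);
```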

4. Validator Module

// src/validators/csvValidator.js
export class CSVValidator {
  constructor() {
    this.requiredFields = [
      '商品ID',
      '商品名',
      '商品価格',
      '公開ステータス'
    ];
    
    this.fieldValidators = {
      '商品価格': this.validatePrice,
      '公開ステータス': this.validateStatus,
      '在庫数': this.validateStock
    };
  }

  /**
   * Validate a single CSV row
   */
  validateRow(row, rowIndex) {
    const errors = [];
    
    // Required fields check
    for (const field of this.requiredFields) {
      if (!row[field] || row[field].trim() === '') {
        errors.push({
          row: rowIndex,
          field,
          message: `Required field "${field}" is empty`
        });
      }
    }
    
    // Field-specific validation
    for (const [field, validator] of Object.entries(this.fieldValidators)) {
      if (row[field]) {
        const error = validator(row[field], field, rowIndex);
        if (error) errors.push(error);
      }
    }
    
    return errors;
  }

  validatePrice(value, field, row) {
    const price = parseInt(value, 10);
    if (isNaN(price) || price < 300 || price > 9999999) {
      return {
        row,
        field,
        message: `Invalid price: ${value} (must be 300-9999999)`
      };
    }
    return null;
  }

  validateStatus(value, field, row) {
    const validStatuses = ['0', '1', '2', '3'];
    if (!validStatuses.includes(value)) {
      return {
        row,
        field,
        message: `Invalid status: ${value} (must be 0-3)`
      };
    }
    return null;
  }

  validateStock(value, field, row) {
    const stock = parseInt(value, 10);
    if (isNaN(stock) || stock < 0 || stock > 999) {
      return {
        row,
        field,
        message: `Invalid stock: ${value} (must be 0-999)`
      };
    }
    return null;
  }

  /**
   * Validate the entire CSV
   */
  validateCSV(data, expectedColumns = 162) {
    const errors = [];
    
    // Column count check
    const columnCount = Object.keys(data[0] || {}).length;
    if (columnCount !== expectedColumns) {
      errors.push({
        type: 'structure',
        message: `Expected ${expectedColumns} columns, found ${columnCount}`
      });
    }
    
    // Validate each row
    data.forEach((row, index) => {
      const rowErrors = this.validateRow(row, index + 2); // +2 for header and 0-index
      errors.push(...rowErrors);
    });
    
    return {
      valid: errors.length === 0,
      errors,
      summary: {
        totalRows: data.length,
        errorCount: errors.length,
        affectedRows: [...new Set(errors.map(e => e.row))].length
      }
    };
  }
}
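The price rule above can be exercised directly. A standalone mirror of `validatePrice` (same 300–9999999 boundaries, written as a free function for illustration):

```javascript
// Validate a price string from the 商品価格 column; returns an error object
// or null, matching the CSVValidator.validatePrice contract.
function validatePrice(value, field, row) {
  const price = parseInt(value, 10);
  if (Number.isNaN(price) || price < 300 || price > 9999999) {
    return { row, field, message: `Invalid price: ${value} (must be 300-9999999)` };
  }
  return null;
}
```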

5. Encoding Handler

// src/utils/encoding.js
import fs from 'fs';
import iconv from 'iconv-lite';
import chardet from 'chardet';

export class EncodingHandler {
  /**
   * Detect and remove the BOM if present
   */
  static removeBOM(content) {
    if (content.charCodeAt(0) === 0xFEFF) {
      return content.slice(1);
    }
    return content;
  }

  /**
   * Add a BOM
   */
  static addBOM(content) {
    return '\ufeff' + content;
  }

  /**
   * Detect a file's encoding
   */
  static async detectEncoding(filePath) {
    return await chardet.detectFile(filePath);
  }

  /**
   * Convert between encodings
   */
  static convertEncoding(buffer, fromEncoding, toEncoding) {
    const content = iconv.decode(buffer, fromEncoding);
    return iconv.encode(content, toEncoding);
  }

  /**
   * Save as UTF-8 with BOM for CSV
   */
  static async saveCSVWithBOM(content, filePath) {
    const withBOM = this.addBOM(content);
    await fs.promises.writeFile(filePath, withBOM, 'utf8');
  }

  /**
   * Save as UTF-8 without BOM for Markdown
   */
  static async saveMarkdownWithoutBOM(content, filePath) {
    const withoutBOM = this.removeBOM(content);
    await fs.promises.writeFile(filePath, withoutBOM, 'utf8');
  }
}

// Named re-exports, matching the `import { removeBOM }` usage in csvParser.js
export const removeBOM = EncodingHandler.removeBOM;
export const addBOM = EncodingHandler.addBOM;
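The BOM helpers are simple enough to verify in isolation. A dependency-free sketch of the add/remove pair and its round-trip property:

```javascript
// U+FEFF is the UTF-8 BOM as seen from JavaScript strings.
const BOM = '\ufeff';

const addBOM = (content) => BOM + content;
const removeBOM = (content) =>
  content.charCodeAt(0) === 0xfeff ? content.slice(1) : content;
```

Adding and then removing the BOM must return the original string, and removal must be a no-op on BOM-free content.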

Implementation Plan

Phase 1: Foundation Building (Week 1)

  1. Initial Project Setup
  • Node.js project setup
  • Install dependencies
  • ESLint/Prettier configuration
  • Basic directory structure
  2. Core Utility Implementation
  • Encoding Handler
  • File Handler
  • Logger
  • Error Handler
  3. Constants and Mapping Definitions
  • 162-column definitions
  • Column definitions for CSV
  • Field mapping specifications

Phase 2: CSV to Markdown Conversion (Week 2)

  1. CSV Parser Implementation
  • Papa Parse integration
  • Read 162-column CSV
  • BOM handling
  • Data validation
  2. Markdown Generator Implementation
  • gray-matter integration
  • Frontmatter generation
  • Content section construction
  • File saving (UTF-8 without BOM)
  3. Field Mapper Implementation
  • CSV to Markdown mapping
  • Data type conversion
  • Status code conversion
  4. Integration Tests
  • Conversion tests with sample CSV
  • Error case tests

Phase 3: Markdown to CSV Conversion (Week 3)

  1. Markdown Parser Implementation
  • Read Markdown files
  • Parse frontmatter
  • Parse sections
  2. CSV Generator Implementation
  • Updated CSV format generation
  • Papa Parse unparse
  • Save with UTF-8 with BOM
  3. Reverse Mapping Implementation
  • Markdown to CSV mapping
  • Set default values
  • Complete required fields
  4. Validation Enhancement
  • Bidirectional conversion integrity check
  • Data loss detection

Phase 4: CLI Implementation (Week 4)

  1. Commander.js Integration
  • Command-line argument processing
  • Option configuration
  • Help generation
  2. Batch Processing Implementation
  • Multiple file processing
  • Progress display
  • Error aggregation
  3. Logging and Reports
  • Conversion result summary
  • Error report generation
  • Processing statistics

Phase 5: Testing and Optimization (Week 5)

  1. Comprehensive Tests
  • Unit tests
  • Integration tests
  • E2E tests
  2. Performance Optimization
  • Large-volume data processing
  • Memory usage optimization
  • Parallel processing
  3. Documentation Preparation
  • API documentation
  • Usage guide
  • Troubleshooting

Dependencies

Core Dependencies

{
  "dependencies": {
    "papaparse": "^5.4.1",
    "gray-matter": "^4.0.3",
    "marked": "^4.3.0",
    "commander": "^11.0.0",
    "chalk": "^5.3.0",
    "ora": "^6.3.1",
    "iconv-lite": "^0.6.3",
    "chardet": "^2.0.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "eslint": "^8.50.0",
    "prettier": "^3.0.0",
    "jest": "^29.0.0",
    "nodemon": "^3.0.0"
  }
}

Library Selection Rationale

  • Papa Parse: high-performance CSV processing, large-file support, streaming support
  • gray-matter: de facto standard for frontmatter processing
  • marked: fast Markdown parser, extensible
  • commander: full-featured CLI framework
  • iconv-lite: encoding conversion with Japanese support
  • chardet: automatic encoding detection

Error Handling Strategy

Error Types and Handling

  1. Parsing Errors
  • Malformed CSV → report the exact error location
  • Markdown syntax errors → show the offending line with a fix suggestion
  2. Validation Errors
  • Missing required fields → fill in defaults or warn
  • Data type mismatches → convert the type or reject the row
  3. Conversion Errors
  • Mapping failures → fall back to default values
  • Encoding problems → auto-detect and convert
  4. System Errors
  • File access failures → retry mechanism
  • Out-of-memory conditions → switch to streaming processing
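The parser code above throws `ParseError` and `ValidationError`, but this document does not show their definitions; the following errorHandler.js sketch is therefore an assumption about their shape:

```javascript
// src/utils/errorHandler.js (assumed shape, not confirmed by the document)

// Raised when Papa Parse reports structural CSV problems.
class ParseError extends Error {
  constructor(message, details = []) {
    super(message);
    this.name = 'ParseError';
    this.details = details; // e.g. the Papa Parse errors array
  }
}

// Raised when row/column validation fails.
class ValidationError extends Error {
  constructor(message, location = {}) {
    super(message);
    this.name = 'ValidationError';
    this.location = location; // e.g. { row, column }
  }
}
```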

Error Report Format

{
  "timestamp": "2024-01-15T10:30:00Z",
  "type": "validation",
  "severity": "error",
  "file": "products.csv",
  "location": {
    "row": 125,
    "column": "商品価格"
  },
  "message": "Invalid price value: '無料'",
  "suggestion": "Price must be a number between 300 and 9999999",
  "context": {
    "productId": "PROD-123",
    "productName": "サンプル商品"
  }
}

Security Considerations

  1. File Path Validation
  • Prevent path traversal attacks
  • Disallow absolute paths
  2. Input Sanitization
  • HTML escaping
  • SQL injection countermeasures
  3. Encoding Security
  • Prevent mojibake (garbled characters)
  • Reject invalid character encodings
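A minimal sketch of the path-validation idea (`isSafeOutputPath` is a hypothetical helper; a production check should also resolve the path against a base directory):

```javascript
// Reject absolute paths and any path containing a ".." traversal segment,
// per the "disallow absolute paths / prevent path traversal" rules above.
function isSafeOutputPath(relativePath) {
  // POSIX absolute path or Windows drive-letter path
  if (relativePath.startsWith('/') || /^[A-Za-z]:[\\/]/.test(relativePath)) {
    return false;
  }
  const segments = relativePath.split(/[\\/]+/);
  return !segments.includes('..');
}
```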

Performance Optimization

Memory Management

// Streaming processing example
import { pipeline } from 'stream/promises';
import { createReadStream, createWriteStream } from 'fs';
import { Transform } from 'stream';

class CSVTransformStream extends Transform {
  constructor(options) {
    super(options);
    // parseChunk and MarkdownConverter are illustrative; a real
    // implementation must also handle rows split across chunk boundaries.
    this.parser = new CSVParser();
    this.converter = new MarkdownConverter();
  }

  _transform(chunk, encoding, callback) {
    try {
      const rows = this.parser.parseChunk(chunk);
      const markdowns = rows.map(row => this.converter.convert(row));
      callback(null, markdowns.join('\n'));
    } catch (error) {
      callback(error);
    }
  }
}

// Large file processing
async function processLargeCSV(inputPath, outputPath) {
  await pipeline(
    createReadStream(inputPath),
    new CSVTransformStream(),
    createWriteStream(outputPath)
  );
}

Parallel Processing

// Parallel processing of multiple files
import { Worker } from 'worker_threads';
import os from 'os';

class ParallelProcessor {
  constructor() {
    this.maxWorkers = os.cpus().length;
    this.workers = [];
  }

  async processFiles(files) {
    const chunks = this.chunkArray(files, this.maxWorkers);
    const promises = chunks.map(chunk => 
      this.processChunk(chunk)
    );
    
    return await Promise.all(promises);
  }

  async processChunk(files) {
    return new Promise((resolve, reject) => {
      const worker = new Worker('./src/workers/converter.js', {
        workerData: { files }
      });
      
      worker.on('message', resolve);
      worker.on('error', reject);
    });
  }
  // Split the file list into at most chunkCount chunks of roughly equal size
  chunkArray(items, chunkCount) {
    if (items.length === 0) return [];
    const size = Math.ceil(items.length / chunkCount);
    const chunks = [];
    for (let i = 0; i < items.length; i += size) {
      chunks.push(items.slice(i, i + size));
    }
    return chunks;
  }
}

Monitoring and Logging

Log Levels

// src/utils/logger.js
import fs from 'fs';

export class Logger {
  static levels = {
    ERROR: 0,
    WARN: 1,
    INFO: 2,
    DEBUG: 3,
    TRACE: 4
  };

  constructor(level = 'INFO') {
    this.level = Logger.levels[level];
  }

  error(message, context = {}) {
    this.log('ERROR', message, context);
  }

  warn(message, context = {}) {
    this.log('WARN', message, context);
  }

  info(message, context = {}) {
    this.log('INFO', message, context);
  }

  debug(message, context = {}) {
    this.log('DEBUG', message, context);
  }

  log(level, message, context) {
    if (Logger.levels[level] <= this.level) {
      const entry = {
        timestamp: new Date().toISOString(),
        level,
        message,
        ...context
      };
      
      console.log(JSON.stringify(entry));
      
      // File logging
      this.writeToFile(entry);
    }
  }

  async writeToFile(entry) {
    const logFile = `./logs/${entry.level.toLowerCase()}-${
      new Date().toISOString().split('T')[0]
    }.log`;
    
    // Ensure the log directory exists, then append via the promise API
    await fs.promises.mkdir('./logs', { recursive: true });
    await fs.promises.appendFile(
      logFile,
      JSON.stringify(entry) + '\n',
      'utf8'
    );
  }
}

Future Extension Plans

  1. API Support
  • Provide a REST API
  • Real-time conversion over WebSocket
  • GraphQL support
  2. GUI Development
  • Desktop application
  • Web UI dashboard
  • Real-time preview
  3. Cross-Platform Support
  • Yahoo! Shopping
  • Rakuten Ichiba
  • BASE
  4. AI Integration
  • Automatic product description generation
  • Automatic category classification
  • Price optimization suggestions

Summary

Product Markdown Sync provides bidirectional conversion between Mercari 162-column CSV exports and Markdown files. The design covers BOM-aware UTF-8 encoding handling, data validation, and structured error handling, and its layered architecture leaves room for streaming and parallel processing of large datasets.