commit 1122b73d3e79a2b42b8093a6e3273e4666d97ea6 Author: Anton Lydike Date: Fri Aug 5 23:02:25 2022 +0200 initial commit diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..64c6955 --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +web/index.html +reviews.json \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000..ccad69a --- /dev/null +++ b/README.md @@ -0,0 +1,10 @@ +# Pesto Blog + +This blog talks about pesto. The HTML generation is slightly convoluted though: + +I write my blog in a markdown file (blog.md) (following a relatively strict schema). + +A python script reads the markdown document and generates a JSON file. + +The JSON file is then used to generate the website. + diff --git a/blog.md b/blog.md new file mode 100644 index 0000000..f69f4ba --- /dev/null +++ b/blog.md @@ -0,0 +1,111 @@ +# Blog of Pesto + +A blog comparing every pesto I could buy + +## Technique + +### Setup: + +The Pasta was Barilla Gemelli or Girandole. The pesto was mixed with starchy pasta-water to try and create a creamy emulsion that would coat the pasta better. + +### Ratings + +Each pesto is compared in these categories: + + - taste: how it tasted + - consistency: How was the consistency, did it form a proper emulsion with the pasta water + - ingredients: what's in it, how much of it, and how is it + - price: how expensive is this pesto + - size: how much is in the glass + + +## The actual reviews: + +### Bernbacher "Pesto Calabrese" +*Date:* 2022-08-02 + +*Eaten with:* Gemelli. + + +*ingredients:* Red and yellow peppers (52%), Sunflower oil, Grana Padano Cheese (7%), Almonds (4.7%), Potato flakes, Salt, Lactic acid, spices. + + +| Category | Rating / Value | +|-------------|----------------| +| taste | 1/5 | +| consistency | 2/5 | +| ingredients | 3/5 | +| price | ??? | +| size | 140g | + +*notes:* This pesto lacked the spicieness I expect from a calabrese pesto. It was not spicy at all, in fact it had a very mild taste. In the context of pesto you could even say it did not taste like much at all. It also did not form the best emulsion and had a grainy texture, presumably from the shredded almonds. Adding Sriracha to it significantly improved the flavour. Make of that what you wish. This is not really surprising, as this pesto consists of around 30% sunflower oil, without a main ingredient carrying much taste (peppers). It's definitely a below-average to bad pesto. + +**Final verdict:** ★★☆☆☆ + +### Barilla "Pesto Vegan" (Green) +*Date:* 2022-08-02 + +*Eaten with:* Gemelli. + + +*ingredients:* Basil (35.6%), Sunflower oil, Cashews, Glucose syrup, water, modified cornstarch, salt, natural aroma, olive oil, sugar, lactic acid. + + +| Category | Rating / Value | +|-------------|----------------| +| taste | 5/5 | +| consistency | 5/5 | +| ingredients | 3/5 | +| price | | +| size | 195g | + +*notes:* This is the classic, creamy, supermarket "Pesto a la Genovese" but vegan (and even more creamy). The taste is a 5/5, it has every element you'd expect from a "Pesto a la Genovese", even though it definitely isn't, as it's missing pine nuts and Parmesan cheese. The taste is one thing, but I definitely can't understate how well it reacted to the pasta water (probably due to the modified starches). It was superbly creamy throughout and coated even the more challenging Gemelli beautifully. Sure, it could have better ingredients (more olive oil, better nuts) but you can't argue with the result. Therefore it deserves the full 5 star rating. 
+ +**Final verdict:** ★★★★★ + +### Cucina "Premium Pesto alla Genovese" +*Date:* 2022-08-04 + +*Eaten with:* Girandole + + +*ingredients:* Basil (44%), Olive oil (18%), Sunflower oil, Grana Padano (6%), Pine nuts (4%), Cashews (4%), Pecorino Romano, salt, sugar, garlic, lactic acid, ascorbic acid. + + +| Category | Rating / Value | +|-------------|----------------| +| taste | 4/5 | +| consistency | 4/5 | +| ingredients | 4/5 | +| price | ??? | +| size | 190g | + +*notes:* This pesto tasted pretty good, although I missed the creaminess of the cheese (as there is almost no cheese at all in this pesto, and nothing to replace it). There was a considerable ammount of visible fibers, almost looking like straw-fibres. Even though it contains a lot of oil, it still coated the paste well and did not separate when I added the pasta water. This pesto definitely has a more "traditional" or "organic" look to it, but it can't quite pull it off. I rate it four out of five stars. + +**Final verdict:** ★★★★☆ + + + + +### Template Make "Name" (Variant) +*Date:* date + +*Eaten with:* noodles. + + +*ingredients:* + + +| Category | Rating / Value | +|-------------|----------------| +| taste | /5 | +| consistency | /5 | +| ingredients | /5 | +| price | | +| size | g | + +*notes:* + +**Final verdict:** ★☆ + + diff --git a/make-website.py b/make-website.py new file mode 100644 index 0000000..27544a4 --- /dev/null +++ b/make-website.py @@ -0,0 +1,66 @@ +import datetime +import json +import re +import shutil + +with open('web/templates/review.html', 'r') as f: + REVIEW_TEMPLATE = f.read() + + +def review_id(review): + return "_".join(re.sub('\s+', '-', x.lower() if x else '') for x in (review['company'], review['name'], review['variant'])) + +def review_title(review): + return '{} "{}" {}'.format( + review['company'], review['name'], + '({})'.format(review['variant']) if review['variant'] else '' + ) + +def generate_website(website_source: str, json_source: str, dest: str): + with open(website_source, 'r') as f: + website_content = f.read() + with open(json_source, 'r') as f: + data = json.load(f) + + website = populate_template_str(website_content, { + 'index': generate_index(data['reviews']), + 'pesto_ratings': '\n\n'.join(generate_review_html(review) for review in data['reviews']), + 'current_year': str(datetime.date.today().year) + }) + + with open(dest, 'w') as f: + f.write(website) + f.write(''.format(datetime.datetime.now())) + + +def generate_review_html(review: dict) -> str: + return populate_template_str(REVIEW_TEMPLATE, { + 'review_id': review_id(review) + , 'title': review_title(review) + , 'date': review['date'] + , 'notes': review['notes'] + , 'ingredients': ', '.join(review['ingredients']) + , 'rating_taste': review['rating_value']['taste'] + , 'rating_consistency': review['rating_value']['consistency'] + , 'rating_ingredients': review['rating_value']['ingredients'] + , 'rating_price': review['rating_value']['price'] + , 'rating_size': review['rating_value']['size'] + , 'rating': review['final_verdict']['string'] + }) + +def generate_index(reviews): + return "".format( + "\n".join('
  • {}
  • '.format( + review_id(review), review_title(review) + ) for review in reviews) + ) + +def populate_template_str(templatestr, fields: dict[str, str]): + def fill(match): + return fields.get(match.group(1).lower(), 'Unknown field {}'.format(match.group(1))) + + return re.sub(r'{([A-Z_]+)}', fill, templatestr) + +if __name__ == '__main__': + generate_website('web/templates/index.html', 'reviews.json', 'web/index.html') + shutil.copy('reviews.json', 'web/reviews.json') \ No newline at end of file diff --git a/parse-md.py b/parse-md.py new file mode 100644 index 0000000..f25bbc9 --- /dev/null +++ b/parse-md.py @@ -0,0 +1,335 @@ +from dataclasses import dataclass +import json +import re +from typing import Dict, Tuple +from math import ceil, log10 +import datetime + +START_OF_REVIEWS = '## The actual reviews:' + + + +# helper classes and functions + +@dataclass +class LexingContext: + sources: Dict[str,str] + + def get_nth_line_bounds(self, source_name: str, n: int): + if source_name not in self.sources: + raise KeyError("Unknown source file \"{}\"!".format(source_name)) + start = 0 + source = self.sources[source_name] + for i in range(n): + next_start = source.find('\n', start) + if next_start == -1: + return None + start = next_start + 1 + return start, source.find('\n', start) + + def get_lines_containing(self, span: 'Span'): + if span.source_name not in self.sources: + raise KeyError("Unknown source file \"{}\"!".format(span.source_name)) + start = 0 + line_no = 0 + source = self.sources[span.source_name] + while True: + next_start = source.find('\n', start) + line_no += 1 + # handle eof + if next_start == -1: + return None + # as long as the next newline comes before the spans start we are good + if next_start < span.start: + start = next_start + 1 + continue + # if the whole span is on one line, we are good as well + if next_start >= span.end: + return [ source[start:next_start] ], start, line_no + while next_start < span.end: + next_start = source.find('\n', next_start+1) + + return source[start:next_start].split('\n'), start, line_no + + + +@dataclass(frozen=True) +class Span: + start: int + """ + Start of tokens location in source file, global byte offset in file + """ + end: int + """ + End of tokens location in source file, global byte offset in file + """ + source_name: str + + context: LexingContext + + def union(self, *spans: 'Span'): + for span in spans: + assert span.source_name == self.source_name + assert span.context == self.context + return Span( + start=min(self.start, *(span.start for span in spans)), + end=max(self.end, *(span.end for span in spans)), + source_name=self.source_name, + context=self.context + ) + + def transform(self, start:int=0, end:int=0): + return Span(self.start + start, self.end + end, self.source_name, self.context) + + def __repr__(self): + return "{}(start={},end={},source_name={})".format( + self.__class__.__name__, + self.start, self.end, self.source_name + ) + + +def create_span_context_str(span: Span, message: str, color: str = '\033[31m'): + lines, offset_into_file, line_no = span.context.get_lines_containing(span) + relative_offset = span.start - offset_into_file + annotation_len = span.end - span.start + + digit_len = ceil(log10(line_no + len(lines))) + if digit_len == 0: + digit_len = 1 + + output_str = ">>> In file {}:{}\n".format(span.source_name, line_no) + + for i, source_line in enumerate(lines): + source_line = source_line[:relative_offset] + color + source_line[relative_offset:relative_offset+annotation_len] + '\033[0m' + 
source_line[relative_offset+annotation_len:] + output_str += '{:>{}d}: {}\n'.format(line_no + i, digit_len, source_line) + + if relative_offset > len(source_line): + continue + # TODO: handle multi-line underlines + output_str += "{}{}{}{}\n".format( + color, + ' ' * (relative_offset + digit_len + 2), + '^' * min(annotation_len, len(source_line) - relative_offset), + '\033[0m' + ) + if annotation_len > len(source_line) - relative_offset: + relative_offset = 0 + annotation_len -= len(source_line) - relative_offset + + if message: + output_str += color + output_str += ' ' * (relative_offset + digit_len + 2) + '|\n' + for message_line in message.split("\n"): + output_str += ' ' * (relative_offset + digit_len + 2) + message_line + '\n' + + return output_str + '\033[0m' + +def print_warning(span: Span, message: str, color="\033[33m"): + print(create_span_context_str(span, "Warning: " + message, color)) + +class ParseError(Exception): + span: Span + message: str + + def __init__(self, msg: str, span: Span=None) -> None: + super().__init__((msg, span)) + self.span = span + self.message = msg + + + def print_context_message(self): + if not self.span: + print("\n".join(">>> {}".format(line) for line in self.message.split('\n'))) + else: + print(create_span_context_str(self.span, self.message)) + + +class EndOfInputError(ParseError): + def __init__(self,span: Span, search_str:str = None) -> None: + + if search_str: + super().__init__(f"Unexpected end-of-input in {span.source_name} while scanning for {search_str}!", span) + else: + super().__init__(f"Unexpected end-of-input in {span.source_name}!", span) + +def to_json_field_name(field_name: str) -> str: + return re.sub(r'[^\w\d]+', '_', field_name).lower().strip('_') + +## parser + +class MarkdownBlogParser: + def __init__(self, source: str) -> None: + self.fname = source + with open(source, 'r') as f: + self.content = f.read() + self.pos = self.content.index(START_OF_REVIEWS) + len(START_OF_REVIEWS) + self.context = LexingContext({source: self.content}) + self.size = len(self.content) + self.reviews = [] + + self.consume_whitespace() + + + def peek(self, offset: int = 0): + if self.pos + offset >= self.size: + return None + return self.content[self.pos + offset] + + def startswith(self, *patterns: str, offset: int = 0): + # match longest first + for pattern in sorted(patterns, key=len, reverse=True): + if self.content.startswith(pattern, self.pos + offset): + return pattern + return False + + def consume_whitespace(self): + while self.pos < self.size and self.content[self.pos] in '\n\r\t ': + self.pos += 1 + if self.pos == self.size: + raise EndOfInputError(Span(self.pos-1, self.pos, self.fname, self.context), "Whitespace") + + + def read_until(self, pattern: str, inclusive=True) -> Tuple[str, Span]: + start = self.pos + pos = self.pos + while pos < self.size and not self.content[pos:].startswith(pattern): + pos += 1 + if pos == self.size: + raise EndOfInputError(Span(start, pos, self.fname, self.context), pattern) + + if inclusive: + pos += len(pattern) + self.pos = pos + + return self.content[start:pos], Span(start, pos, self.fname, self.context) + + def parse(self): + line, span = self.read_until('\n', inclusive=True) + result = re.fullmatch(r'### ([\w\s]+)\s+("[^"]+")[ \t]*(\([^)]+\))?\n', line) + if not result: + raise ParseError("Expected review heading of form '### Company \"pesto name\" (variant)\n'", span.transform(end=-1)) + # now we get the first bit of info! 
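+        # Illustrative example (heading taken from blog.md): a line such as
+        #   ### Barilla "Pesto Vegan" (Green)
+        # matches with group(1) == 'Barilla', group(2) == '"Pesto Vegan"' and
+        # group(3) == '(Green)'; the surrounding quotes/parentheses are stripped below.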
+ company, name, variant = (result.group(x) for x in (1,2,3)) + self.current_review = { + 'company': company, + 'name': name.strip()[1:-1], + 'variant': variant.strip()[1:-1] if variant else None, + } + if 'template' in line.lower(): + return self.reviews + + # parse inner review fields + while self.inner_review_parse(): + pass + + # add review to global list + self.reviews.append(self.current_review) + + # and next review! + return self.parse() + + def inner_review_parse(self): + # read until next thing + self.consume_whitespace() + if self.startswith('### '): + return None + # we are done! + + # we have an item: + if self.startswith('*'): + token = '*' + if self.startswith('**'): + token = '**' + self.pos += len(token) + title, span = self.read_until(token, False) + self.pos += len(token) + if title[-1] != ':': + raise ParseError("Expected field declaration like '*Date:*'", span) + + field_name = to_json_field_name(title) + value, span = self.read_until('\n\n') + self.current_review[field_name] = value.strip() + return True + + # we have a table! how exciting! + if self.startswith('|'): + # skip headers + # TODO: validate headers + headers, span = self.read_until('\n') + headers = headers.split('|') + if not len(headers) == 4: + raise ParseError("Expected table header here (like '|Category | Rating / Score |'", span.transform(end=-1)) + table_name = to_json_field_name(headers[2]) + # skip alignment col + # TODO: validate alignment col + line, span = self.read_until('\n') + if not len(line.split('|')) == len(headers): + raise ParseError("Alignment row seems invalid, must contain the same number of '|' as headers!", span.transform(end=-1)) + + values = dict() + + while self.peek() == '|': + line, span = self.read_until('\n') + line = line.split('|') + if len(line) != len(headers): + raise ParseError("Content row seems invalid, must contain the same number of '|' as headers!", span.transform(end=-1)) + values[to_json_field_name(line[1])] = line[2].strip() + + self.current_review[table_name] = values + return True + + raise ParseError("Unexpected input!", Span(self.pos, self.pos+1, self.fname, self.context)) + + +class ReviewPostprocessor: + def __init__(self) -> None: + pass + + def process_all(self, dicts): + return [ + self.process(d) for d in dicts + ] + + def process(self, review: dict) -> dict: + def noop(input): + return input + + return { + field: getattr(self, field, noop)(value) + for field, value in review.items() + } + + def ingredients(self, ingredients: str): + return [ + x.strip() for x in ingredients.rstrip('.').split(',') + ] + + def rating_value(self, table: Dict[str, str]): + new = dict() + for key, value in table.items(): + new[key] = value + if '/' in value: + x,y = value.split('/') + new[key + '_percent'] = float(x) / float(y) + return new + + def final_verdict(self, verdict: str): + return { + 'string': verdict, + 'value': verdict.count('★') / len(verdict) + } + + +if __name__ == '__main__': + parser = MarkdownBlogParser('blog.md') + try: + reviews = ReviewPostprocessor().process_all(parser.parse()) + with open("reviews.json", 'w') as f: + json.dump({ + 'reviews': reviews, + 'created': str(datetime.date.today()) + }, f, indent=2) + except ParseError as err: + err.print_context_message() + + diff --git a/web/style.css b/web/style.css new file mode 100644 index 0000000..e69de29 diff --git a/web/templates/index.html b/web/templates/index.html new file mode 100644 index 0000000..f3e24bf --- /dev/null +++ b/web/templates/index.html @@ -0,0 +1,55 @@ + + + + + + + + Blog of 
Pesto + + + +
    +

    + Blog of Pesto + reviewing all sorts of pesto +

    +
    +
    +

    About this Blog

    + + I wanted to save money, so I decided to eat very inexpensively (mostly noodles with pesto). To motivate myself, I decided to try to eat and review every pesto I could buy. This is the result. + +

    Setup

    + + The pasta was Barilla Gemelli or Girandole. The pesto was mixed with the noodles and some starchy pasta-water to try and create a creamy emulsion that would coat the pasta better. + +

    Rating

    + + Each pesto is compared in these categories: taste, consistency, ingredients, price, and size. + + I also list the ingredients and add some notes on why I rated the pesto the way I did. I then give a final grade. + +

    Index

    + + {INDEX} + +

    The Pesto

    + + {PESTO_RATINGS} + +
    + + + + \ No newline at end of file diff --git a/web/templates/review.html b/web/templates/review.html new file mode 100644 index 0000000..cea215b --- /dev/null +++ b/web/templates/review.html @@ -0,0 +1,21 @@ +

    {TITLE}

    + +

    Date: {DATE}

    + +

    Notes: {NOTES}

    + +

    Ingredients: {INGREDIENTS}

    + + + + + + + + + + + +
    Category Score / Value
    Taste {RATING_TASTE}
    Consistency {RATING_CONSISTENCY}
    Ingredients {RATING_INGREDIENTS}
    Price {RATING_PRICE}
    Size {RATING_SIZE}
    + +

    Final rating: {RATING}

    \ No newline at end of file
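As a quick illustration of how the {UPPERCASE} placeholders in web/templates/index.html and web/templates/review.html get filled in, here is a minimal, self-contained Python sketch. The substitution logic mirrors populate_template_str from make-website.py; the template string and field values below are invented examples, not the real templates or review data.

import re

def populate_template_str(templatestr: str, fields: dict) -> str:
    # Replace every {UPPER_CASE} marker with the matching (lowercased) dict entry.
    def fill(match):
        return fields.get(match.group(1).lower(), 'Unknown field {}'.format(match.group(1)))
    return re.sub(r'{([A-Z_]+)}', fill, templatestr)

example_template = 'Final rating: {RATING} (reviewed {DATE})'  # hypothetical snippet
print(populate_template_str(example_template, {'rating': '★★★★★', 'date': '2022-08-02'}))
# prints: Final rating: ★★★★★ (reviewed 2022-08-02)

Unknown placeholders fall back to an 'Unknown field ...' string rather than raising, which matches the behaviour of the real script.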