How to Benchmark Boost Spirit Parser

How to benchmark Boost Spirit Parser?

I have given things a quick scan.

My profiler quickly told me that constructing the grammar and (especially) the lexer object took quite some resources.

Indeed, just changing a single line in SpiritParser.cpp saved 40% of execution time[1] (~28s down to ~17s):

    lexer::Lexer lexer;

into

    static const lexer::Lexer lexer;
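As an aside, the same "construct once, reuse" idea applies to any Qi parser object, not just the lexer. Here is a minimal stand-alone sketch of the pattern with a toy grammar (illustrative only, not the eddic lexer):

    #include <boost/spirit/include/qi.hpp>
    #include <string>
    #include <vector>

    namespace qi = boost::spirit::qi;

    std::vector<int> parse_csv(std::string const& line) {
        // constructed exactly once, on first use, instead of on every call
        static const qi::rule<std::string::const_iterator, std::vector<int>()> csv(qi::int_ % ',');

        std::vector<int> out;
        qi::parse(line.begin(), line.end(), csv, out);
        return out;
    }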

Now,

  • making the grammar static involves making it stateless. I made this happen by

    • moving position_begin into qi::_a (using qi::locals) and
    • passing it in as an inherited attribute at the appropriate times (a stand-alone sketch of this pattern follows this list)

      • the EDDIGrammar and ValueGrammar grammars, e.g.

        start %= qi::eps [ qi::_a = qi::_r1 ] >> program;
      • as well as the individual rules from ValueGrammar that are used externally.

    This had a number of suboptimal side effects:

    • rule debugging is commented out because the lexer::pos_iterator_type has no default output streaming overload
    • the qi::position(position_begin) expression has been 'faked' with a rather elaborate replacement:

      auto local_pos = qi::lazy(
          boost::phoenix::construct<qi::position>(qi::_a)
      );

      That doesn't seem ideal. (Ideally, one would like to replace qi::position with a modified custom parser directive that knows how to get begin_position from qi::locals, so there would be no need to invoke a parser expression lazily.)
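For reference, here is a minimal stand-alone sketch of the qi::locals + inherited-attribute pattern described above, using a toy integer grammar (the names and the grammar itself are purely illustrative, not the eddic ones):

    #include <boost/spirit/include/qi.hpp>
    #include <boost/spirit/include/phoenix.hpp>
    #include <iostream>
    #include <string>

    namespace qi = boost::spirit::qi;

    template <typename It>
    struct Toy : qi::grammar<It, int(int), qi::locals<int>> {
        Toy() : Toy::base_type(start) {
            using namespace qi;
            // stash the inherited attribute (_r1) in a local (_a) up front,
            // then use it later in the rule body
            start = eps[_a = _r1] >> int_[_val = _1 + _a];
        }
        qi::rule<It, int(int), qi::locals<int>> start;
    };

    int main() {
        std::string const s = "42";
        static const Toy<std::string::const_iterator> g; // stateless, so it can be static const
        int out = 0;
        qi::parse(s.begin(), s.end(), g(boost::phoenix::val(100)), out);
        std::cout << out << "\n"; // prints 142
    }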

Anyways, implementing these further changes[2] shaved off another ~15% of execution time:

static const lexer::Lexer lexer;
static const parser::EddiGrammar grammar(lexer);

try {
    bool r = spirit::lex::tokenize_and_parse(
            position_begin, position_end,
            lexer,
            grammar(boost::phoenix::cref(position_begin)),
            program);

Loose ideas:

  • Have you considered generating a static lexer (Generating the Static Analyzer)? A rough sketch of the generator step follows this list.
  • Have you considered using expectation points to potentially reduce the amount of backtracking (note: I didn't measure anything in that area)
  • Have you considered alternatives for Position::file and Position::theLine? Copying the strings seems heavier than necessary. I'd prefer to store const char *. You could also look at Boost Flyweight
  • Is the pre-skip really required inside your qi::position directive?
  • (Somewhat non-serious: have you considered porting to Spirit X3? It seems to promise potential benefits in the form of move semantics.)
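Regarding the first loose idea: Spirit.Lex can pre-generate the DFA tables ahead of time (in a separate generator program), so the lexer object no longer has to build them at run time. A rough sketch of the generator step, modeled on the Spirit.Lex static-lexer documentation; the toy token set and the "toy" suffix are just placeholders, and the consuming side would then switch to lex::lexertl::static_lexer<> using the generated header:

    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <boost/spirit/include/lex_generate_static_lexertl.hpp>
    #include <fstream>

    namespace lex = boost::spirit::lex;

    // Toy token definitions standing in for the project's lexer::Lexer
    template <typename L>
    struct ToyTokens : lex::lexer<L> {
        ToyTokens() {
            this->self = lex::token_def<>("[0-9]+")
                       | lex::token_def<>("[a-zA-Z_]+")
                       | lex::token_def<>("[ \\t\\n]+");
        }
    };

    int main() {
        ToyTokens<lex::lexertl::lexer<>> tokens;
        std::ofstream out("toy_static_lexer.hpp");
        // writes a header containing the pre-built DFA tables
        return lex::lexertl::generate_static_dfa(tokens, out, "toy") ? 0 : 1;
    }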

Hope this helps.


[1] When parsing all test cases in test/cases/*.eddi 100 times like so (github):

for (int i=0; i<100; i++)
    for (auto& fname : argv)
    {
        eddic::ast::SourceFile program;
        std::cout << fname << ": " << std::boolalpha << parser.parse(fname, program, nullptr) << "\n";
    }

Timed with a simple

time ./test ../../test/cases/*.eddi | md5sum

With the md5sum acting as a sanity check.

[2] I created a pull request with the proof-of-concept refactorings here https://github.com/wichtounet/eddic/pull/52

Why does using a stream in boost spirit penalize performance so much?

Okay, in the code posted, 70%¹ of time is spent in the stream's underflow operation.

I haven't looked into /why/ that is, but instead² wrote a few naive implementations to see whether I could do better. First steps:

² Update: I've since analyzed it and provided a PR. The improvement from that PR does not affect the bottom line in this particular case (see SUMMARY).

  • drop operator>> for Timestamp (we won't be using that)
  • replace all instances of '[' >> stream >> ']' with the alternative '[' >> raw[*~char_(']')] >> ']' so that we will always be using the trait to transform the iterator range into the attribute type (std::string or Timestamp)

Now, we implement the assign_to_attribute_from_iterators<structs::Timestamp, It> trait:

Variant 1: Array Source

template <typename It>
struct assign_to_attribute_from_iterators<structs::Timestamp, It, void> {
    static inline void call(It f, It l, structs::Timestamp& time) {
        boost::iostreams::stream<boost::iostreams::array_source> stream(f, l);

        struct std::tm tm;
        if (stream >> std::get_time(&tm, "%Y-%b-%d %H:%M:%S") >> time.ms)
            time.date = std::mktime(&tm);
        else throw "Parse failure";
    }
};

Profiling with callgrind (call graph not reproduced here):

It does improve considerably, probably because we make the assumption that the underlying char-buffer is contiguous, where the Spirit implementation cannot make that assumption. We spend ~42% of the time in time_get.

Roughly speaking, 25% of time is devoted to locale stuff, of which a worrying ~20% is spent doing dynamic casts :(

Variant 2: Array Source with re-use

Same, but reusing a static stream instance to see whether it makes a significant difference:

static boost::iostreams::stream<boost::iostreams::array_source> s_stream;

template <typename It>
struct assign_to_attribute_from_iterators<structs::Timestamp, It, void> {
    static inline void call(It f, It l, structs::Timestamp& time) {
        struct std::tm tm;

        if (s_stream.is_open()) s_stream.close();
        s_stream.clear();
        boost::iostreams::array_source as(f, l);
        s_stream.open(as);

        if (s_stream >> std::get_time(&tm, "%Y-%b-%d %H:%M:%S") >> time.ms)
            time.date = std::mktime(&tm);
        else throw "Parse failure";
    }
};

Profiling reveals no significant difference.

Variant 3: strptime and strtod/from_chars

Let's see if dropping to C-level reduces the locale hurt:

template <typename It>
struct assign_to_attribute_from_iterators<structs::Timestamp, It, void> {
    static inline void call(It f, It l, structs::Timestamp& time) {
        struct std::tm tm;
        auto remain = strptime(&*f, "%Y-%b-%d %H:%M:%S", &tm);
        time.date = std::mktime(&tm);

#if __has_include(<charconv>) || __cpp_lib_to_chars >= 201611
        // using <charconv> from c++17; continue from where strptime stopped
        auto result = std::from_chars(remain, &*l, time.ms);
#else
        char* end;
        time.ms = std::strtod(remain, &end);

        assert(end > remain);
        static_cast<void>(l); // unused
#endif
    }
};

As you can see, using strtod is a bit suboptimal here. The input range is bounded, but there's no way to tell strtod about that. I have not been able to profile the from_chars approach, which is strictly safer because it doesn't have this issue.

In practice for your sample code it is safe to use strtod because we know the input buffer is NUL-terminated.
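For completeness, this is roughly what the bounded parse looks like with std::from_chars: it takes an explicit end pointer, so unlike strtod it can never read past the end of the range (note that libstdc++ only ships the floating-point overloads from GCC 11 onwards):

    #include <charconv>
    #include <system_error>

    // Parse the fractional-seconds part from a bounded [first, last) range.
    double parse_ms(char const* first, char const* last) {
        double ms = 0.0;
        auto [ptr, ec] = std::from_chars(first, last, ms); // never reads past 'last'
        (void)ptr;
        return ec == std::errc() ? ms : 0.0;
    }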

Here you can see that parsing the date-time is still a factor of concern:

  • mktime 15.58 %
  • strptime 40.54 %
  • strtod 5.88 %

But all in all the difference is less egregious now:

  • Parser1: 14.17 %
  • Parser2: 43.44 %
  • Parser3: 5.69 %
  • Parser4: 35.49 %

Variant 4: Boost DateTime Again

Interestingly, the performance of the "low-level" C APIs is not far off that of the much more high-level Boost posix_time::ptime functions:

template <typename It>
struct assign_to_attribute_from_iterators<structs::Timestamp, It, void> {
    static inline void call(It f, It l, structs::Timestamp& time) {
        time.date = to_time_t(boost::posix_time::time_from_string(std::string(f,l)));
    }
};

This might sacrifice some precision, according to the docs (the relevant documentation screenshot is not reproduced here).

Here, the total time spent parsing date and time is 68%. The relative speeds of the parsers are close to the last ones:

  • Parser1: 12.33 %
  • Parser2: 43.86 %
  • Parser3: 5.22 %
  • Parser4: 37.43 %

SUMMARY

All in all, it turns out that storing the strings seems faster, even if you risk allocating more. I've done a very simple check whether this could be down to SSO by increasing the length of the substring:

static const std::string input1 = "[2018-Mar-01 00:01:02.012345 THWARTING THE SMALL STRING OPTIMIZATION HERE THIS WON'T FIT, NO DOUBT] - 1.000 s => String: Valid_string\n";
static const std::string input2 = "[2018-Mar-02 00:01:02.012345 THWARTING THE SMALL STRING OPTIMIZATION HERE THIS WON'T FIT, NO DOUBT] - 2.000 s => I dont care\n";

There was no significant impact, so that leaves the parsing itself.

It seems clear that either you will want to delay parsing the time (Parser3 is by far the quickest) or should go with the time-tested Boost posix_time functions.

LISTING

Here's the combined benchmark code I used. A few things changed:

  • added some sanity check output (to avoid testing nonsensical code)
  • made the iterator generic (changing to char* has no significant effect on performance in optimized builds)
  • the above variants are all manually switchable in the code by changing #if 1 to #if 0 in the right spots
  • reduced N1/N2 for convenience

I've liberally used C++14 because the purpose of the code was to find bottlenecks. Any wisdom gained can be backported relatively easily after the profiling.

Live On Coliru

#include <boost/fusion/adapted/struct.hpp>
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix.hpp>
#include <boost/spirit/repository/include/qi_seek.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/chrono/chrono.hpp>
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <iostream>
#include <iomanip>
#include <cassert>
#include <ctime>
#if __has_include(<charconv>) || __cpp_lib_to_chars >= 201611
# include <charconv> // using <charconv> from C++17 (not available before GCC 8)
#endif

namespace structs {
    struct Timestamp {
        std::time_t date;
        double ms;
    };

    struct Record1 {
        std::string date;
        double time;
        std::string str;
    };

    struct Record2 {
        Timestamp date;
        double time;
        std::string str;
    };

    typedef std::vector<Record1> Records1;
    typedef std::vector<Record2> Records2;
}

BOOST_FUSION_ADAPT_STRUCT(structs::Record1,
        (std::string, date)
        (double, time)
        (std::string, str))

BOOST_FUSION_ADAPT_STRUCT(structs::Record2,
        (structs::Timestamp, date)
        (double, time)
        (std::string, str))

namespace boost { namespace spirit { namespace traits {
    template <typename It>
    struct assign_to_attribute_from_iterators<std::string, It, void> {
        static inline void call(It f, It l, std::string& attr) {
            attr = std::string(&*f, std::distance(f,l));
        }
    };

    static boost::iostreams::stream<boost::iostreams::array_source> s_stream;

    template <typename It>
    struct assign_to_attribute_from_iterators<structs::Timestamp, It, void> {
        static inline void call(It f, It l, structs::Timestamp& time) {
#if 1
            time.date = to_time_t(boost::posix_time::time_from_string(std::string(f,l)));
#elif 1
            struct std::tm tm;
            boost::iostreams::stream<boost::iostreams::array_source> stream(f, l);

            if (stream >> std::get_time(&tm, "%Y-%b-%d %H:%M:%S") >> time.ms)
                time.date = std::mktime(&tm);
            else
                throw "Parse failure";
#elif 1
            struct std::tm tm;
            if (s_stream.is_open()) s_stream.close();
            s_stream.clear();
            boost::iostreams::array_source as(f, l);
            s_stream.open(as);

            if (s_stream >> std::get_time(&tm, "%Y-%b-%d %H:%M:%S") >> time.ms)
                time.date = std::mktime(&tm);
            else
                throw "Parse failure";
#else
            struct std::tm tm;
            auto remain = strptime(&*f, "%Y-%b-%d %H:%M:%S", &tm);
            time.date = std::mktime(&tm);

#if __has_include(<charconv>) || __cpp_lib_to_chars >= 201611
            // using <charconv> from c++17; continue from where strptime stopped
            auto result = std::from_chars(remain, &*l, time.ms);
#else
            char* end;
            time.ms = std::strtod(remain, &end);

            assert(end > remain);
            static_cast<void>(l); // unused
#endif
#endif
        }
    };
} } }

namespace qi = boost::spirit::qi;

namespace QiParsers {
    template <typename It>
    struct Parser1 : qi::grammar<It, structs::Record1()>
    {
        Parser1() : Parser1::base_type(start) {
            using namespace qi;

            start = '[' >> raw[*~char_(']')] >> ']'
                >> " - " >> double_ >> " s"
                >> " => String: " >> raw[+graph]
                >> eol;
        }

    private:
        qi::rule<It, structs::Record1()> start;
    };

    template <typename It>
    struct Parser2 : qi::grammar<It, structs::Record2()>
    {
        Parser2() : Parser2::base_type(start) {
            using namespace qi;

            start = '[' >> raw[*~char_(']')] >> ']'
                >> " - " >> double_ >> " s"
                >> " => String: " >> raw[+graph]
                >> eol;
        }

    private:
        qi::rule<It, structs::Record2()> start;
    };

    template <typename It>
    struct Parser3 : qi::grammar<It, structs::Records1()>
    {
        Parser3() : Parser3::base_type(start) {
            using namespace qi;
            using boost::phoenix::push_back;

            line = '[' >> raw[*~char_(']')] >> ']'
                >> " - " >> double_ >> " s"
                >> " => String: " >> raw[+graph];

            ignore = *~char_("\r\n");

            start = (line[push_back(_val, _1)] | ignore) % eol;
        }

    private:
        qi::rule<It> ignore;
        qi::rule<It, structs::Record1()> line;
        qi::rule<It, structs::Records1()> start;
    };

    template <typename It>
    struct Parser4 : qi::grammar<It, structs::Records2()>
    {
        Parser4() : Parser4::base_type(start) {
            using namespace qi;
            using boost::phoenix::push_back;

            line = '[' >> raw[*~char_(']')] >> ']'
                >> " - " >> double_ >> " s"
                >> " => String: " >> raw[+graph];

            ignore = *~char_("\r\n");

            start = (line[push_back(_val, _1)] | ignore) % eol;
        }

    private:
        qi::rule<It> ignore;
        qi::rule<It, structs::Record2()> line;
        qi::rule<It, structs::Records2()> start;
    };
}

template <typename Parser> static const Parser s_instance {};

template<template <typename> class Parser, typename Container, typename It>
Container parse_seek(It b, It e, const std::string& message)
{
    Container records;

    auto const t0 = boost::chrono::high_resolution_clock::now();
    parse(b, e, *boost::spirit::repository::qi::seek[s_instance<Parser<It> >], records);
    auto const t1 = boost::chrono::high_resolution_clock::now();

    auto elapsed = boost::chrono::duration_cast<boost::chrono::milliseconds>(t1 - t0);
    std::cout << "Elapsed time: " << elapsed.count() << " ms (" << message << ")\n";

    return records;
}

template<template <typename> class Parser, typename Container, typename It>
Container parse_ignoring(It b, It e, const std::string& message)
{
    Container records;

    auto const t0 = boost::chrono::high_resolution_clock::now();
    parse(b, e, s_instance<Parser<It> >, records);
    auto const t1 = boost::chrono::high_resolution_clock::now();

    auto elapsed = boost::chrono::duration_cast<boost::chrono::milliseconds>(t1 - t0);
    std::cout << "Elapsed time: " << elapsed.count() << " ms (" << message << ")\n";

    return records;
}

static const std::string input1 = "[2018-Mar-01 00:01:02.012345] - 1.000 s => String: Valid_string\n";
static const std::string input2 = "[2018-Mar-02 00:01:02.012345] - 2.000 s => I dont care\n";

std::string prepare_input() {
    std::string input;
    const int N1 = 10;
    const int N2 = 1000;

    input.reserve(N1 * (input1.size() + N2*input2.size()));

    for (int i = N1; i--;) {
        input += input1;
        for (int j = N2; j--;)
            input += input2;
    }

    return input;
}

int main() {
    auto const input = prepare_input();

    auto f = input.data(), l = f + input.length();

    for (auto& r: parse_seek<QiParsers::Parser1, structs::Records1>(f, l, "std::string + seek")) {
        std::cout << r.date << "\n";
        break;
    }
    for (auto& r: parse_seek<QiParsers::Parser2, structs::Records2>(f, l, "stream + seek")) {
        auto tm = *std::localtime(&r.date.date);
        std::cout << std::put_time(&tm, "%Y-%b-%d %H:%M:%S") << "\n";
        break;
    }
    for (auto& r: parse_ignoring<QiParsers::Parser3, structs::Records1>(f, l, "std::string + ignoring")) {
        std::cout << r.date << "\n";
        break;
    }
    for (auto& r: parse_ignoring<QiParsers::Parser4, structs::Records2>(f, l, "stream + ignoring")) {
        auto tm = *std::localtime(&r.date.date);
        std::cout << std::put_time(&tm, "%Y-%b-%d %H:%M:%S") << "\n";
        break;
    }
}

Printing something like

Elapsed time: 14 ms (std::string + seek)
2018-Mar-01 00:01:02.012345
Elapsed time: 29 ms (stream + seek)
2018-Mar-01 00:01:02
Elapsed time: 2 ms (std::string + ignoring)
2018-Mar-01 00:01:02.012345
Elapsed time: 22 ms (stream + ignoring)
2018-Mar-01 00:01:02

¹ All percentages are relative to total program cost. That does skew the percentages (the 70% mentioned would be even worse if the non-stream parser tests weren't taken into account), but the numbers are a good enough guide for relative comparisons within a test run.

Boost Spirit: slow parsing optimization

I've hooked your grammar into a Nonius benchmark and generated uniformly random input data of ~85k lines (download: http://stackoverflow-sehe.s3.amazonaws.com/input.txt, 7.4 MB).

  • are you measuring time in a release build?
  • are you using slow file input?

When reading the file up-front I consistently get a time of ~36ms to parse the whole bunch.

clock resolution: mean is 17.616 ns (40960002 iterations)

benchmarking sample
collecting 100 samples, 1 iterations each, in estimated 3.82932 s
mean: 36.0971 ms, lb 35.9127 ms, ub 36.4456 ms, ci 0.95
std dev: 1252.71 μs, lb 762.716 μs, ub 2.003 ms, ci 0.95
found 6 outliers among 100 samples (6%)
variance is moderately inflated by outliers
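(For reference, "reading the file up-front" can be as simple as slurping it into one contiguous buffer and handing the parser plain character pointers; the listing below goes a step further and memory-maps the file instead.)

    #include <fstream>
    #include <sstream>
    #include <string>

    // Read the whole file into a single contiguous buffer so the grammar can
    // run on char const* iterators instead of a stream-based iterator.
    std::string slurp(char const* path) {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }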

Code: see below.


Notes:

  • you seem conflicted on the use of skippers and seek together. I'd suggest you simplify prefix:

    comment     = '#' >> *(qi::char_ - qi::eol);

    prefix = repo::seek[
        qi::lit("point") >> '[' >> *comment
    ];

    prefix will use the space skipper, and ignore any matched attributes (because of the rule declared type). Make comment implicitly a lexeme by dropping the skipper from the rule declaration:

    // implicit lexeme:
    qi::rule<Iterator> comment;

    Note See Boost spirit skipper issues for more background information.

Live On Coliru

#include <boost/fusion/adapted/struct.hpp>
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/repository/include/qi_seek.hpp>
#include <algorithm>
#include <cassert>
#include <iostream>

namespace qi = boost::spirit::qi;
namespace repo = boost::spirit::repository;

struct Point { double a = 0, b = 0, c = 0; };

BOOST_FUSION_ADAPT_STRUCT(Point, a, b, c)

template <typename Iterator>
struct PointParser : public qi::grammar<Iterator, std::vector<Point>(), qi::space_type>
{
    PointParser() : PointParser::base_type(start, "PointGrammar")
    {
        singlePoint = qi::double_ >> qi::double_ >> qi::double_ >> *qi::lit(',');

        comment = '#' >> *(qi::char_ - qi::eol);

        prefix = repo::seek[
            qi::lit("point") >> '[' >> *comment
        ];

        //prefix = repo::seek[qi::lexeme[qi::skip[qi::lit("point")>>qi::lit("[")>>*comment]]];

        start %= prefix >> *singlePoint;

        //BOOST_SPIRIT_DEBUG_NODES((prefix)(comment)(singlePoint)(start));
    }

private:
    qi::rule<Iterator, Point(), qi::space_type> singlePoint;
    qi::rule<Iterator, std::vector<Point>(), qi::space_type> start;
    qi::rule<Iterator, qi::space_type> prefix;
    // implicit lexeme:
    qi::rule<Iterator> comment;
};

#include <nonius/benchmark.h++>
#include <nonius/main.h++>
#include <boost/iostreams/device/mapped_file.hpp>

static boost::iostreams::mapped_file_source src("input.txt");

NONIUS_BENCHMARK("sample", [](nonius::chronometer cm) {
    std::vector<Point> points;

    using It = char const*;
    PointParser<It> g2;

    cm.measure([&](int) {
        It f = src.begin(), l = src.end();
        return phrase_parse(f, l, g2, qi::space, points);

        bool ok = phrase_parse(f, l, g2, qi::space, points);
        if (ok)
            std::cout << "Parsed " << points.size() << " points\n";
        else
            std::cout << "Parsed failed\n";

        if (f!=l)
            std::cout << "Remaining unparsed input: '" << std::string(f,std::min(f+30, l)) << "'\n";

        assert(ok);
    });
})

Graph: (interactive Nonius chart not reproduced here; see the linked run output below)

Another run output, live:

  • http://stackoverflow-sehe.s3.amazonaws.com/30dd790b-8b52-4eab-a130-8d6896207b2f.html (click all individual samples)

Boost Spirit QI slow

I found a solution to my problem. As described in the post Boost Spirit QI grammar slow for parsing delimited strings,
the performance bottleneck is the string handling of Spirit Qi. All other data types seem to be quite fast.

I avoid this problem by handling the data myself instead of relying on Spirit Qi's attribute handling.

My solution uses a helper class which offers a function for every field of the CSV file. The functions store the values into a struct; strings are stored in fixed-size char arrays. When the parser hits a newline character, it calls a function which adds the struct to the result vector.
The Boost parser calls these functions (via semantic actions) instead of storing the values into a vector on its own.

Here is my code for the region.tbl file of the TPC-H benchmark:

struct region {
    int r_regionkey;
    char r_name[25];
    char r_comment[152];
};

class regionStorage {
public:
    regionStorage(vector<region>* regions) : regions(regions), pos(0) {}

    void storer_regionkey(int const& i) {
        currentregion.r_regionkey = i;
    }

    void storer_name(char const& i) {
        currentregion.r_name[pos] = i;
        pos++;
    }

    void storer_comment(char const& i) {
        currentregion.r_comment[pos] = i;
        pos++;
    }

    void resetPos() {
        pos = 0;
    }

    void endOfLine() {
        pos = 0;
        regions->push_back(currentregion);
    }

private:
    vector<region>* regions;
    region currentregion;
    int pos;
};

void parseRegion() {
    vector<region> regions;
    regionStorage regionstorageObject(&regions);

    phrase_parse(dataPointer, /*< start iterator >*/
        state->dataEndPointer, /*< end iterator >*/
        (*(lexeme[
            +(int_[boost::bind(&regionStorage::storer_regionkey, &regionstorageObject, _1)] - '|') >> '|' >>
            +(char_[boost::bind(&regionStorage::storer_name, &regionstorageObject, _1)] - '|') >> char_('|')[boost::bind(&regionStorage::resetPos, &regionstorageObject)] >>
            +(char_[boost::bind(&regionStorage::storer_comment, &regionstorageObject, _1)] - '|') >> char_('|')[boost::bind(&regionStorage::endOfLine, &regionstorageObject)]
        ])), space);

    cout << regions.size() << endl;
}

It is not a pretty solution, but it works and it is much faster (2.2 seconds for 1 GB of TPC-H data, multithreaded).

Boost Spirit parser rule to parse square brackets

You need to show the code. Here's a simple tester that shows that all of the parsers succeed and give the expected result:

Live On Coliru

#include <boost/spirit/include/qi.hpp>
#include <iomanip>
#include <iostream>

namespace qi = boost::spirit::qi;

int main()
{
    using It = std::string::const_iterator;

    for (std::string const input : {"xyz[aa:bb]:blah"})
    {
        qi::rule<It, std::string()> rules[] = {
            +(~qi::char_("\r\n;,=")),
            +(~qi::char_("\r\n;,=") | "[" | "]"),
            +(qi::char_ - qi::char_("\r\n;,=")),
        };
        for (auto const& r : rules)
        {
            std::string out;
            std::cout << std::boolalpha
                << qi::parse(begin(input), end(input), r, out) << " -> "
                << std::quoted(out) << "\n";
        }
    }
}

Printing:

true -> "xyz[aa:bb]:blah"
true -> "xyz[aa:bb]:blah"
true -> "xyz[aa:bb]:blah"

The best guesses I have, not seeing your actual code:

  • you are using a skipper that eats some of your characters (see Boost spirit skipper issues; a quick demo follows this list)

  • you are using an input iterator that interferes

  • you're invoking UB. To be honest, without context, this seems the most likely
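A quick illustration of the first guess: the very same character-class parser, driven through phrase_parse with qi::space as the skipper, silently drops the blanks that plain qi::parse would have kept:

    #include <boost/spirit/include/qi.hpp>
    #include <iostream>
    #include <string>

    int main() {
        namespace qi = boost::spirit::qi;
        std::string const input = "xyz [aa : bb]";

        std::string with_skipper, without_skipper;
        qi::phrase_parse(input.begin(), input.end(), +~qi::char_("\r\n;,="), qi::space, with_skipper);
        qi::parse(input.begin(), input.end(), +~qi::char_("\r\n;,="), without_skipper);

        std::cout << "with skipper:    \"" << with_skipper    << "\"\n"   // "xyz[aa:bb]"
                  << "without skipper: \"" << without_skipper << "\"\n";  // "xyz [aa : bb]"
    }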

If you show more code I'll happily diagnose which it is.

Getting into boost spirit; Qi or X3?

X3 is more recent, still experimental and requires C++14.

Qi

  • is more stable
  • supports stateful options (e.g. qi::locals, inherited attributes) more easily
  • supports lazy parsers (which you might like)
  • is much slower to compile

The docs are

  • https://www.boost.org/doc/libs/1_68_0/libs/spirit/doc/html/spirit/qi.html
  • https://www.boost.org/doc/libs/1_68_0/libs/spirit/doc/x3/html/index.html
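Purely to give a feel for the stylistic difference (not part of the original comparison), here is the same toy grammar written against both APIs; X3 needs a reasonably recent Boost and C++14:

    #include <boost/spirit/home/x3.hpp>
    #include <boost/spirit/include/qi.hpp>
    #include <string>
    #include <vector>

    // Same comma-separated integer list, once with X3 and once with Qi.
    std::vector<int> with_x3(std::string const& s) {
        namespace x3 = boost::spirit::x3;
        std::vector<int> v;
        x3::phrase_parse(s.begin(), s.end(), x3::int_ % ',', x3::space, v);
        return v;
    }

    std::vector<int> with_qi(std::string const& s) {
        namespace qi = boost::spirit::qi;
        std::vector<int> v;
        qi::phrase_parse(s.begin(), s.end(), qi::int_ % ',', qi::space, v);
        return v;
    }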

Boost Spirit X3 Skip Parser Implementation?

Yeah that's fine.

The skipper seems pretty optimal. You could optimize the quoted_string rule by reordering and using character set negation (operator~):

Live On Coliru

#include <boost/spirit/home/x3.hpp>

namespace parser {
    namespace x3 = boost::spirit::x3;

    auto const quoted_string = x3::lexeme[ '"' >> *('\\' >> x3::char_ | ~x3::char_("\"\n")) >> '"' ];
    auto const space_comment = x3::space | x3::lexeme[ '#' >> *(x3::char_ - x3::eol) >> x3::eol];
}

#include <iostream>
int main() {
    std::string result, s1 = "# foo\n\n#bar\n \t\"This is a simple string, containing \\\"escaped quotes\\\"\"";

    phrase_parse(s1.begin(), s1.end(), parser::quoted_string, parser::space_comment, result);

    std::cout << "Original: `" << s1 << "`\nResult: `" << result << "`\n";
}

Prints

Original: `# foo

#bar
"This is a simple string, containing \"escaped quotes\""`
Result: `This is a simple string, containing "escaped quotes"`

Minimizing boost::spirit compile times

I have come to the conclusion that boost::spirit, elegant as it is, is not a viable option for many real-world parsing problems due to the lengthy compile times that even experts cannot fix.

It is often best to stick to something like flex, which may be ugly and old-fashioned, but is relatively simple and lightning fast.

As an example of what I consider a 'real world' problem, here is the railroad diagram of the most important part of a parser that flex compiles in a couple of seconds, but that boost::spirit is still chugging away on after ten minutes:

(railroad diagram not reproduced here)


