[Privoxy-devel] TODO 157

Lee ler762 at gmail.com
Sun May 28 07:38:30 UTC 2017


On 5/27/17, Fabian Keil <fk at fabiankeil.de> wrote:
> Lee <ler762 at gmail.com> wrote:
>
>> On 5/25/17, Fabian Keil <fk at fabiankeil.de> wrote:
>> > Lee <ler762 at gmail.com> wrote:
>> >
>> >> On 5/24/17, Fabian Keil <fk at fabiankeil.de> wrote:
>> >
>> >> > Lee <ler762 at gmail.com> wrote on ijbswa-developers@:
>
>> >> How do you feel about turning off filtering for things that most
>> >> probably don't need it?
>>   <.. snip ..>
>> >
>> > Have you observed these file types to be frequently served
>> > with a Content-Type that Privoxy considers to be filterable?
>>
>> remember this thread
>>   https://sourceforge.net/p/ijbswa/mailman/message/26831156/
>> The last time I remember looking at it was around that time & I forgot
>> all about filtering not working on encrypted connections - I just kept
>> adding to the "don't bother filtering this" list.
>>
>> Is there an easy way to tell if something is being filtered?
>
> Enabling "debug 64" and checking the log seems easy enough.

I was afraid of that
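For anyone following along, that means adding (or uncommenting) a line like
the following in Privoxy's main config file. The comment is how I remember
the stock config describing level 64, so check your own copy:

```
# debug 64 = debug regular expression filters
debug        64
```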

The only place I see buffer_and_filter_content being set is

   buffer_and_filter_content = content_requires_filtering(csp);

so I just added

   if (buffer_and_filter_content)
   {
      log_error(LOG_LEVEL_INFO, "buffer_and_filter_content set to %d",
         buffer_and_filter_content);
   }

right after it, to make it easier to match up the Request: line with the
"Info: buffer_and_filter_content set" line.

>> > Obviously they could be changed but this is
>> > unrelated to this commit.
>> >
>> > For reasonable buffer sizes I expect the performance impact
>> > to be minimal so I don't consider this a priority.
>>
>> just curious - what do you consider a reasonable buffer size?
>> I've had it set to 46720 for I don't know how long & haven't noticed
>> any problems.
>
> It depends on the connection. I don't see the point of using
> a buffer that is so large that it never gets even close to
> being full.
>
> I'm usually using Privoxy with Tor. Given this read length distribution:
>
> [...]
>             8900 |                                         47
>             9000 |@                                        155
>             9100 |                                         9
>             9200 |                                         2
>             9300 |                                         3
>             9400 |                                         17
>             9500 |                                         7
>             9600 |                                         7
>             9700 |                                         2
>             9800 |                                         15
>             9900 |                                         12
>            10000 |@@@@@@@@                                 2321
>            20000 |                                         16
>            30000 |                                         2
>            40000 |                                         1
>            50000 |                                         1
>            60000 |                                         0
>
> it seems reasonable to me to use a receive-buffer-size below
> 20k. Using a bit more is unlikely to hurt but isn't likely to
> noticeably improve things either.

Here's what I got today using a buffer size of 186880+1
    .. snip ..
  1000 :  18711 reads     4.37%
  2000 :  47102 reads    10.99%
  3000 :    505 reads    11.06%
  4000 :  26782 reads    14.83%
  5000 :  41452 reads    20.66%
  6000 :    252 reads    20.70%
  7000 :  26090 reads    24.37%
  8000 : 151613 reads    45.70%
  9000 :    177 reads    45.72%
 10000 : 214951 reads    75.96%
 20000 : 126742 reads    93.80%
 30000 :  23898 reads    97.16%
 40000 :   8481 reads    98.35%
 50000 :   3428 reads    98.83%
 60000 :   2087 reads    99.13%
 70000 :   1347 reads    99.32%
 80000 :    698 reads    99.41%
 90000 :    458 reads    99.48%
100000 :   1687 reads    99.72%
186880 :   2016 reads   100.00%

which is skewed towards the high end because I was streaming music & some video

>> > I don't know. I made those tests in a bhyve vm.
>> > While it has an emulated 10GB/s interface the tests
>> > used the loopback interface and not the external network
>> > or the Internet.
>>
>> Is it possible to have a dropped packet when you're using the loopback
>> interface?
>
> The system wasn't intentionally configured to drop packets
> and it seems unlikely that the tests themselves resulted in
> packet loss.
>
>> Doing your testing on a vm seems like a good way to figure out where
>> all the bottlenecks are, but it seems like you'd be missing most if
>> not all of the nasty things that happen on the internet like dropped
>> packets.
>
> I'm aware of that.
>
> I'm also intentionally using test scenarios that are likely to show
> the impact of the patch set I'm testing.

OK - it kind of sounded like you might be doing all your testing over
the loopback interface.

Thanks,
Lee

