
Article: 158075
Subject: Re: Picking the best synthesis result before implementation
From: GaborSzakacs <gabor@alacron.com>
Date: Fri, 31 Jul 2015 11:29:01 -0400
Brian Drummond wrote:
> On Thu, 30 Jul 2015 20:23:12 -0700, James07 wrote:
> 
>> Out of curiosity, I wrote a script to explore with different options in
>> the Vivado software (2014.4), especially on the synthesis options under
>> SYNTH_DESIGN, like FSM_extraction, MAX_BRAM etc. The script stops after
>> synthesis, just enough to get the timing estimate. I explore everything
>> except the directive because it seems like if you use the directive, you
>> cannot manually set the options.
>>
>> My goal is to see if it will give me a better result before I move on to
>> implementation. However, out of the 50 different results I see that a
>> lot of the estimated worst slacks and timing scores are the same. About
>> 40% of the results report the same values. I ran on 3 sample designs and
>> it gave me the same thing.
>>
>> So my question is, is there a way to differentiate what is a better
>> synthesis result? What should I look at in the report?
> 
> Did you also differentiate by resource usage? Same timing result and 
> lower usage would count as better, but sometimes different settings will, 
> after optimisation, yield the same result.
> 
> It's also worth trying ISE, with both the old and new VHDL parser (though 
> switching parsers is more likely to dance round bugs than improve synth 
> results). 

You can't use the old parser on 6 or 7 series parts.  It's OK to
use the newer parser for older parts, but the use_new_parser
switch is ignored for 6 or 7 series.  So in effect there's only
one XST implementation to try if you are using 7-series parts.
ISE does allow you to use SmartXplorer to investigate different
canned sets of options, though.  I usually find that you need to
individually tune the settings to get the best results.

> 
> While Vivado is relatively new, ISE has been heavily tuned across the 
> years and I wouldn't be surprised to find it sometimes gives better 
> results.
> 
> If you try it, I'd be interested to see your conclusions.
> 
> -- Brian

-- 
Gabor
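James07's sweep script isn't shown in the thread, but the idea can be sketched. Below is a minimal, hypothetical generator of synth_design command lines: `-fsm_extraction`, `-max_bram`, and `-flatten_hierarchy` are real synth_design switches of that Vivado generation, while the value sets, top-level name, and part number are invented for illustration.

```python
from itertools import product

# Hypothetical sweep space -- check "synth_design -help" in your Vivado
# version for the exact switches and their legal values.
OPTION_SPACE = {
    "-fsm_extraction": ["auto", "one_hot", "gray", "none"],
    "-max_bram": ["-1", "64"],
    "-flatten_hierarchy": ["rebuilt", "full", "none"],
}

def sweep_commands(top, part, space=OPTION_SPACE):
    """Yield one synth_design Tcl command per combination of options."""
    names = sorted(space)
    for values in product(*(space[n] for n in names)):
        opts = " ".join(f"{n} {v}" for n, v in zip(names, values))
        yield f"synth_design -top {top} -part {part} {opts}"

cmds = list(sweep_commands("my_top", "xc7k325tffg900-2"))
print(len(cmds))  # 4 * 2 * 3 = 24 combinations
```

Each emitted line would go into its own run script, which fits Gabor's observation that the settings usually need individual tuning rather than canned strategy sets.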

Article: 158076
Subject: Re: Picking the best synthesis result before implementation
From: Sharad <sharad.snh@gmail.com>
Date: Sat, 1 Aug 2015 21:41:37 -0700 (PDT)
On Friday, July 31, 2015 at 11:23:18 AM UTC+8, James07 wrote:
> Out of curiosity, I wrote a script to explore with different options in
> the Vivado software (2014.4), especially on the synthesis options under
> SYNTH_DESIGN, like FSM_extraction, MAX_BRAM etc. The script stops after
> synthesis, just enough to get the timing estimate. I explore everything
> except the directive because it seems like if you use the directive, you
> cannot manually set the options.
>
> My goal is to see if it will give me a better result before I move on to
> implementation. However, out of the 50 different results I see that a
> lot of the estimated worst slacks and timing scores are the same. About
> 40% of the results report the same values. I ran on 3 sample designs and
> it gave me the same thing.
>
> So my question is, is there a way to differentiate what is a better
> synthesis result? What should I look at in the report?

1. Lower area utilization with similar timing results would be considered
good. However, it will be even better to take a look at the individual
utilization of resources like LUTs, BRAM and DSP blocks. You may want to
choose a synthesis result that allows you to add more features to your
design in the future. Such features may require BRAM or DSP in different
proportions. So, it might be good to see the synthesis results,
especially area, with respect to expected feature changes in the future.

2. Power is another factor that you may consider when deciding which is a
better synthesis result. If you have two synthesis results, where one
uses a lot of LUTs while the other uses a lot of DSP blocks, it is very
likely that the one with DSP blocks will dissipate less dynamic power.
This is because DSP blocks are optimized hard IP blocks on the device.

3. Have you analysed your results with respect to pin assignment? If pin
assignment is critical to how your FPGA will be placed on the board, you
may want to see the synthesis results from that perspective. Under no pin
assignment constraint, the tool automatically assigns pins to the design.
Pin assignment constraints are not applied by the tool during a
"synthesis-only" run, but the default pin assignment and corresponding
synthesis results can be analyzed with respect to your planned pin
assignment.

4. If a large percentage of synthesis runs give similar results, it also
means that the tool is not finding many opportunities to perform various
optimizations. It could be because your design is already very well
architected, or it could be that it needs to be re-architected if you are
aiming for certain specific performance measures. As the designer, you
know better which is the case with the design.
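Sharad's first point, breaking ties on per-resource usage when slack is similar, can be written down as a small selection rule. Everything below is invented sample data; the weights are arbitrary placeholders that a designer would tune to reflect which resources future features will need.

```python
# Invented per-run records: worst negative slack (ns) plus resource
# counts of the kind report_utilization would provide.
runs = [
    {"name": "run_a", "wns": -0.30, "lut": 41000, "bram": 120, "dsp": 96},
    {"name": "run_b", "wns": -0.30, "lut": 39500, "bram": 140, "dsp": 96},
    {"name": "run_c", "wns": -0.45, "lut": 38000, "bram": 110, "dsp": 80},
]

def pick(runs, slack_tol=0.05, weights=(1.0, 50.0, 20.0)):
    """Among runs within slack_tol ns of the best worst-slack, pick the
    one with the lowest weighted LUT/BRAM/DSP cost."""
    best_wns = max(r["wns"] for r in runs)
    near = [r for r in runs if best_wns - r["wns"] <= slack_tol]
    wl, wb, wd = weights
    return min(near, key=lambda r: wl*r["lut"] + wb*r["bram"] + wd*r["dsp"])

print(pick(runs)["name"])  # run_b: same slack as run_a, cheaper overall
```

With a looser slack tolerance run_c wins instead, which mirrors the point that "better" depends on how much timing you are willing to trade for area headroom.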

Article: 158077
Subject: Re: Picking the best synthesis result before implementation
From: kt8128@gmail.com
Date: Sun, 2 Aug 2015 03:28:07 -0700 (PDT)
On Friday, July 31, 2015 at 6:41:47 PM UTC+8, Brian Drummond wrote:
> Did you also differentiate by resource usage? Same timing result and
> lower usage would count as better, but sometimes different settings
> will, after optimisation, yield the same result.
>
As far as I can tell, the resource usage is almost the same. I am taking
another look. At first glance, for the 40% I mentioned, they look almost
identical, which is also partly why I can't tell these clone troopers
apart.

> It's also worth trying ISE, with both the old and new VHDL parser
> (though switching parsers is more likely to dance round bugs than
> improve synth results).
>
> While Vivado is relatively new, ISE has been heavily tuned across the
> years and I wouldn't be surprised to find it sometimes gives better
> results.
>
> If you try it, I'd be interested to see your conclusions.
>
> -- Brian

Yes, I am intending to try it on ISE. The latest (and last!) ISE version
14.7 works on one of the older V7 devices. I will try that and see what
is the result, although I am not so sure if it gives estimated timing
scores after synthesis. Need to look into it.

Article: 158078
Subject: Re: Picking the best synthesis result before implementation
From: kt8128@gmail.com
Date: Sun, 2 Aug 2015 03:56:49 -0700 (PDT)
On Sunday, August 2, 2015 at 12:41:43 PM UTC+8, Sharad wrote:
> 1. Lower area utilization with similar timing results would be
> considered good. However, it will be even better to take a look at the
> individual utilization of resources like LUTs, BRAM and DSP blocks. You
> may want to choose a synthesis result that allows you to add more
> features to your design in the future. Such features may require BRAM
> or DSP in different proportions. So, it might be good to see the
> synthesis results, especially area, with respect to expected feature
> changes in the future.
>
> 2. Power is another factor that you may consider when deciding which is
> a better synthesis result. If you have two synthesis results, where one
> uses a lot of LUTs while the other uses a lot of DSP blocks, it is very
> likely that the one with DSP blocks will dissipate less dynamic power.
> This is because DSP blocks are optimized hard IP blocks on the device.
>
> 3. Have you analysed your results with respect to pin assignment? If
> pin assignment is critical to how your FPGA will be placed on the
> board, you may want to see the synthesis results from that perspective.
> Under no pin assignment constraint, the tool automatically assigns pins
> to the design. Pin assignment constraints are not applied by the tool
> during a "synthesis-only" run, but the default pin assignment and
> corresponding synthesis results can be analyzed with respect to your
> planned pin assignment.
>

This is a good point. No, I haven't got to that step. Based on what I
understand of the Vivado flow, that happens during the place_design
phase. Hmm... so perhaps the next step is to take those 40% of results
and continue running them till the end of place_design, and check out
the timing estimates. I guess the later it is in the flow, the more
accurate it becomes.

> 4. If a large percentage of synthesis runs give similar results, it
> also means that the tool is not finding many opportunities to perform
> various optimizations. It could be because your design is already very
> well architected, or it could be that it needs to be re-architected if
> you are aiming for certain specific performance measures. As the
> designer, you know better which is the case with the design.

I wouldn't say it is already well-architected. Sometimes my hands are
tied and I can't change the code. So I am exploring ways to work the
tools to my advantage. Thanks for the helpful comments.
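The next step described above, carrying the tied post-synthesis runs through place_design and re-reading the timing estimate, might be scripted along these lines. `open_checkpoint`, `opt_design`, `place_design`, and `report_timing_summary` are standard Vivado Tcl commands; the checkpoint and report file names are assumptions for illustration.

```python
def place_and_report_tcl(checkpoints):
    """Emit Tcl that opens each post-synthesis checkpoint, places it,
    and dumps a post-place timing summary for comparison."""
    lines = []
    for dcp in checkpoints:
        stem = dcp.rsplit(".", 1)[0]
        lines += [
            f"open_checkpoint {dcp}",
            "opt_design",
            "place_design",
            f"report_timing_summary -file {stem}_postplace.rpt",
            "close_design",
        ]
    return "\n".join(lines)

script = place_and_report_tcl(["run_07.dcp", "run_19.dcp"])
print(script.splitlines()[0])  # open_checkpoint run_07.dcp
```

The post-place reports can then be compared instead of the post-synthesis estimates, which (as noted later in the thread) get more accurate the further along the flow they are taken.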

Article: 158079
Subject: Re: Image Compression in an FPGA
From: "Tomas D." <mailsoc@gmial.com>
Date: Sun, 2 Aug 2015 16:28:14 +0100

> JPEG is not very good at reproducing line art with high contrast
> ratio.  For example save a screen capture in TIFF, PNG,
> and JPEG and you'll see that JPEG gives the most blurring and
> artefacts unless you set it for very little compression.  PNG
> is quite good at achieving compression on computer-generated
> images with text or line drawings.

The reason I've offered JPEG is because it's available on OpenCores and
tested to work fine. I am not sure if there's a PNG encoder anywhere
available for free... And I wonder what the logic utilization difference
would be.



Article: 158080
Subject: Re: Picking the best synthesis result before implementation
From: rickman <gnuarm@gmail.com>
Date: Sun, 02 Aug 2015 12:59:07 -0400
On 8/2/2015 6:56 AM, kt8128@gmail.com wrote:
> On Sunday, August 2, 2015 at 12:41:43 PM UTC+8, Sharad wrote:
>> 1. Lower area utilization with similar timing results would be
>> considered good. However, it will be even better to take a look at
>> the individual utilization of resources like LUTs, BRAM and DSP
>> blocks. You may want to choose a synthesis result that allows you
>> to add more features to your design in the future. Such features
>> may require BRAM or DSP in different proportions. So, it might be
>> good to see the synthesis results, especially area, with respect to
>> expected feature changes in the future.
>>
>
>> 2. Power is another factor that you may consider when deciding
>> which is a better synthesis result. If you have two synthesis
>> results, where one uses a lot of LUTs while the other uses a lot of
>> DSP blocks, it is very likely that the one with DSP blocks will
>> dissipate less dynamic power. This is because DSP blocks are
>> optimized hard IP blocks on the device.
>>
>> 3. Have you analysed your results with respect to pin assignment?
>> If pin assignment is critical to how your FPGA will be placed on
>> the board, you may want to see the synthesis results with that
>> perspective. Under no pin assignment constraint, the tool
>> automatically assigns pins to the design. Pin assignment constraint
>> is not applied by the tool during "synthesis-only" run. But the
>> default pin assignment and corresponding synthesis results can be
>> analyzed with respect to your planned pin assignment.
>>
>
> This is a good point. No, I haven't got to that step. Based on what I
> understand from the Vivado flow, that happens during place_design
> phase. Hmm... so perhaps the next step is to take that 40% results
> and continue running them till end of place_design, and check out the
> timing estimates. I guess the later it is in the flow, the more
> accurate it becomes.

My experience is the timing numbers from synthesis are totally bogus. 
You need to do a place and route if you want to compare timing data. 
Even then you can get noticeable improvements in timing by running more 
than one route with different settings.  So the connection back to your 
synthesis parameters is hard to explore without a lot of work.  Using 
one pass on place and route may show synthesis option A to be the best 
by 4% but when you explore the routing options you may find synthesis 
option B is now 7% better.

I think this problem space is very chaotic with small changes in initial 
conditions giving large changes in results.

I worked on a project once where the timing analysis tools were broken, 
saying the project met timing when it didn't.  The design would fail on 
the bench until we hit it with cold spray.  I tried using manual 
placement to improve the routing, but everything I did to improve one 
feature made some other feature worse or even unroutable.

We automated a process of tweaking the initial seed parameter to get 
multiple runs each night.  The next day we would test those runs on the 
bench with a chip warmer.  Eventually we found a good design and shipped 
it.  Ever since then I have treated the entire compile-place-route 
process like an exploration of the Mandelbrot set.
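The overnight seed sweep described above can be sketched as a worker pool mapped over seed values. `run_par` here is a stand-in that fabricates a deterministic slack number so the sweep logic is visible; a real version would invoke the vendor's place-and-route with the given seed and parse the reported worst slack.

```python
from concurrent.futures import ThreadPoolExecutor

def run_par(seed):
    """Stand-in for one place-and-route run.  The fake slack below is
    just a deterministic function of the seed."""
    return {"seed": seed, "wns": ((seed * 37) % 100) / 100.0 - 0.5}

def seed_sweep(seeds, workers=4):
    """Launch one run per seed, return results sorted best-slack-first."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_par, seeds))
    return sorted(results, key=lambda r: r["wns"], reverse=True)

best = seed_sweep(range(1, 9))[0]
print(best["seed"])
```

The sorted list gives the candidates to carry to the bench the next day, worst-slack winners first.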


>> 4. If a large percentage of synthesis results give similar results,
>> it also means that the tool is not finding many opportunities to
>> perform various optimizations. It could be because your design is
>> already very well architected or it could be that it needs to be
>> re-architected if you are aiming for certain specific performance
>> measures. As the designer, you know better which is the case with
>> the design.
>
> I wouldn't say it is already well-architected. Sometimes my hands are
> tied and I can't change the code. So I am exploring ways to work the
> tools to my advantage. Thanks for the helpful comments.

Is there a particular problem you are having with the results?  Is the 
design larger than you need?  If you haven't done a place-route I guess 
it can't be that it is too slow.  If you are just trying to "optimize" I 
suggest you don't bother and just move on to the place and route.  See 
what sorts of results you get before you spend time trying to optimize a 
design that may be perfectly good.

There is a rule about optimization.  It says *don't* unless you have to. 
Optimizing for "this" can make it harder to get "that" working, or at 
the very least result in spending a lot of time on something that isn't 
important in the end.

-- 

Rick

Article: 158081
Subject: Re: Image Compression in an FPGA
From: rickman <gnuarm@gmail.com>
Date: Sun, 02 Aug 2015 13:01:47 -0400
On 8/2/2015 11:28 AM, Tomas D. wrote:
>> JPEG is not very good at reproducing line art with high contrast
>> ratio.  For example save a screen capture in TIFF, PNG,
>> and JPEG and you'll see that JPEG gives the most blurring and
>> artefacts unless you set it for very little compression.  PNG
>> is quite good at achieving compression on computer-generated
>> images with text or line drawings.
>
> The reason I've offered JPEG is because it's available on OpenCores and
> tested to work fine. I am not sure if there's a PNG encoder anywhere
> available for free... And I wonder what the logic utilization difference
> would be.

The guy is using SVG, which is not only much more highly compressed 
than JPG, it is *much* easier to produce.  I don't think he will have 
any trouble finding someone to write the code.  Even if a core is stated 
to be "working", you need to do your due diligence and verify any core 
you use.  That is often as much work as writing the core.

-- 

Rick

Article: 158082
Subject: Re: Picking the best synthesis result before implementation
From: kt8128@gmail.com
Date: Sun, 2 Aug 2015 19:14:29 -0700 (PDT)
On Monday, August 3, 2015 at 12:59:14 AM UTC+8, rickman wrote:
>
> My experience is the timing numbers from synthesis are totally bogus.
> You need to do a place and route if you want to compare timing data.
> Even then you can get noticeable improvements in timing by running more
> than one route with different settings.  So the connection back to your
> synthesis parameters is hard to explore without a lot of work.  Using
> one pass on place and route may show synthesis option A to be the best
> by 4% but when you explore the routing options you may find synthesis
> option B is now 7% better.
>
> I think this problem space is very chaotic with small changes in
> initial conditions giving large changes in results.

Yes, I understand that and have seen that myself. Part of it is why I am
struggling to qualify what is a "good" synthesis result, with meeting
timing as the end goal. For example, let's say synthesis set "A" has a
10% chance of meeting timing with various P&R settings, while synthesis
set "B" has only 5%. *Something* has got to be that difference.

> I worked on a project once where the timing analysis tools were broken,
> saying the project met timing when it didn't.  The design would fail on
> the bench until we hit it with cold spray.

This is hilarious!

> Is there a particular problem you are having with the results?  Is the
> design larger than you need?  If you haven't done a place-route I guess
> it can't be that it is too slow.  If you are just trying to "optimize"
> I suggest you don't bother and just move on to the place and route.
> See what sorts of results you get before you spend time trying to
> optimize a design that may be perfectly good.

I have done place-and-route a couple of times and it takes around 8
hours (1 hour for synthesis). I tried different directives as well and
it gave me a variety of results.

I understand how I am approaching this may not be practical in the
grand scheme of things. BUT I got curious when I read in the Vivado
design methodology that if you get -300ps after post-synthesis, you can
definitely meet timing. I also vaguely remember an illustration showing
synthesis has a 10x effect on end results. I wonder how, and by whom,
these estimates were made.

> There is a rule about optimization.  It says *don't* unless you have
> to.  Optimizing for "this" can make it harder to get "that" working,
> or at the very least result in spending a lot of time on something
> that isn't important in the end.
>
> -- Rick


Article: 158083
Subject: Re: Picking the best synthesis result before implementation
From: rickman <gnuarm@gmail.com>
Date: Sun, 02 Aug 2015 23:26:01 -0400
On 8/2/2015 10:14 PM, kt8128@gmail.com wrote:
> On Monday, August 3, 2015 at 12:59:14 AM UTC+8, rickman wrote:
>>
>> My experience is the timing numbers from synthesis are totally
>> bogus. You need to do a place and route if you want to compare
>> timing data. Even then you can get noticeable improvements in
>> timing by running more than one route with different settings.  So
>> the connection back to your synthesis parameters is hard to explore
>> without a lot of work.  Using one pass on place and route may show
>> synthesis option A to be the best by 4% but when you explore the
>> routing options you may find synthesis option B is now 7% better.
>>
>> I think this problem space is very chaotic with small changes in
>> initial conditions giving large changes in results.
>
> Yes, I understand that and have seen that myself. Part of it is why I
> am struggling to qualify what is a "good" synthesis result, with
> meeting timing as the end goal. For example, let's say synthesis set
> "A" has a 10% chance of meeting timing with various P&R settings,
> while synthesis set "B" has only 5%. *Something* has got to be that
> difference.

I think there is little about your synthesis result that can be easily 
measured in a meaningful way to predict the timing result of routing. 
That is what I mean about it being "chaotic".  It is much like 
predicting the weather more than a week out.  You can see general 
trends, but it is hard to predict any details with any accuracy.  So the 
weatherman just doesn't try.

In FPGAs the synthesis tool has no insight into routing, so it just 
measures the logic delays and then adds a standard factor for routing. 
Routing can be impacted by the logic partitioning in ways that are hard 
to predict.  I'd be willing to speculate it is a bit like the proof 
that, in general, predicting the run time of a computer algorithm takes 
as much run time as the algorithm itself.  So the best way to estimate 
run time is to run the task, and the best way to estimate the routing 
result is to run routing.  Routing is often half the total path time, so 
without good info on that there is no decent guess at timing.


>> I worked on a project once where the timing analysis tools were
>> broken saying the project met timing when it didn't.  The design
>> would fail on the bench until we hit it with cold spray.
>
> This is hilarious!

This was also some time ago, using the Altera MAX+PLUS II tools when 
Quartus was the "current" tool.  The trouble was Altera didn't support 
the older devices with the new Quartus tool.  We were adding features to 
an existing product, so we didn't have the luxury of using the new tools 
with new parts.  Eventually they relented and did support the older 
parts with Quartus, but it was well after our project was done.  I 
expect we weren't the only customer to want support for older products.


>> Is there a particular problem you are having with the results?  Is
>> the design larger than you need?  If you haven't done a place-route
>> I guess it can't be that it is too slow.  If you are just trying to
>> "optimize" I suggest you don't bother and just move on to the place
>> and route.  See what sorts of results you get before you spend time
>> trying to optimize a design that may be perfectly good.
>
> I have done place-route a couple of times and it takes around 8
> hours. (1 hour for synthesis) I tried different directives as well
> and it gave me a variety of results.

Must be a large project.  The project we were on would load up multiple 
runs on many CPUs overnight.  This would give us many trials to sort 
through the next day.  Best if this is done on a design that has passed 
all logic checks and even runs in the board with a reduced clock or cold 
spray.


> I understand how I am approaching this may not be practical in the
> grand scheme of things. BUT I got curious when I read in the Vivado
> design methodology that if you get -300ps after post-synthesis, you
> can definitely meet timing. I also vaguely remember an illustration
> showing synthesis has a 10x effect on end results. I wonder how, and
> by whom, these estimates were made.

I'm not sure what a "10x effect" means.  But sure, a bad synthesis will 
give you a bad timing result.  On large projects it is sometimes hard to 
deal with timing issues.  You might try breaking the project down into 
smaller pieces to see if they will meet timing separately.  Perhaps you 
will find a given module that is a problem and can focus on code changes 
to improve the synthesis?  I don't think you can do much just by 
tweaking tool parameters.

Are your modules partitioned in a way that lets each one be checked for 
timing without lots of paths that cross?

-- 

Rick

Article: 158084
Subject: Re: Picking the best synthesis result before implementation
From: Brian Drummond <brian@shapes.demon.co.uk>
Date: Mon, 3 Aug 2015 09:31:24 +0000 (UTC)
On Sun, 02 Aug 2015 03:28:07 -0700, kt8128 wrote:

> On Friday, July 31, 2015 at 6:41:47 PM UTC+8, Brian Drummond wrote:

>> While Vivado is relatively new, ISE has been heavily tuned across the
>> years and I wouldn't be surprised to find it sometimes gives better
>> results.

> Yes, I am intending to try it on ISE. The latest (and last!) ISE version
> 14.7 works on one of the older V7 devices. I will try that and see what
> is the result, although I am not so sure if it gives estimated timing
> scores after synthesis. Need to look into it.

It does. If you can't see what you want in the summary, read the .syr 
(Synth report) file.

-- Brian


Article: 158085
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: Aleksandar Kuktin <akuktin@gmail.com>
Date: Tue, 4 Aug 2015 23:05:00 +0000 (UTC)
On Wed, 29 Jul 2015 12:32:08 -0400, rickman wrote:

> On 7/29/2015 5:50 AM, Jan Coombs <Jan-54 wrote:

>> a) iCE40 - see rest of thread. (Note suggestion to buy 3 iCE40 sticks
>> if using IceStorm, one or more parts might die during (mal?)practice)
> 
> I thought I had read the thread.  What did I miss?  All I've seen is
> that there are alternative tools available that may or may not be as
> good as the vendor's tools.  Other than not having to fight the
> licensing, what improvement do the alternative tools provide?

Hackability. If you have an itch, you can scratch it yourself with FOSS 
tools. If you discover a bug, you can fix it yourself. If you want to 
repurpose, optimize or otherwise change the tool, you can do it with FOSS.

Article: 158086
Subject: Re: Picking the best synthesis result before implementation
From: Aleksandar Kuktin <akuktin@gmail.com>
Date: Tue, 4 Aug 2015 23:08:09 +0000 (UTC)
On Thu, 30 Jul 2015 20:23:12 -0700, James07 wrote:

> Out of curiosity, I wrote a script to explore with different options in
> the Vivado software (2014.4), especially on the synthesis options under
> SYNTH_DESIGN, like FSM_extraction, MAX_BRAM etc. The script stops after
> synthesis, just enough to get the timing estimate. I explore everything
> except the directive because it seems like if you use the directive, you
> cannot manually set the options.
> 
> My goal is to see if it will give me a better result before I move on to
> implementation. However, out of the 50 different results I see that a
> lot of the estimated worst slacks and timing scores are the same. About
> 40% of the results report the same values. I ran on 3 sample designs and
> it gave me the same thing.
> 
> So my question is, is there a way to differentiate what is a better
> synthesis result? What should I look at in the report?

It is possible that you are giving the tools test cases that are too 
simple. Try giving them something complicated - like big designs with 
very heavy interconnectivity that also need to be fast - and see how 
they fare then.

Article: 158087
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: rickman <gnuarm@gmail.com>
Date: Tue, 04 Aug 2015 19:46:38 -0400
On 8/4/2015 7:05 PM, Aleksandar Kuktin wrote:
> On Wed, 29 Jul 2015 12:32:08 -0400, rickman wrote:
>
>> On 7/29/2015 5:50 AM, Jan Coombs <Jan-54 wrote:
>
>>> a) iCE40 - see rest of thread. (Note suggestion to buy 3 iCE40 sticks
>>> if using IceStorm, one or more parts might die during (mal?)practice)
>>
>> I thought I had read the thread.  What did I miss?  All I've seen is
>> that there are alternative tools available that may or may not be as
>> good as the vendor's tools.  Other than not having to fight the
>> licensing, what improvement do the alternative tools provide?
>
> Hackability. If you have an itch, you can scratch it yourself with FOSS
> tools. If you discover a bug, you can fix it yourself. If you want to
> repurpose, optimize or otherwise change the tool, you can do it with FOSS.

That's great.  But only important to a small few.  I use tools to get 
work done.  I have zero interest in digging into the code of the tools 
without a real need.  I have not found any bugs in the vendor's tools 
that would make me want to spend weeks learning how they work in the, 
most likely, vain hope that I could fix them.

I think FOSS is great and I am very happy to see that finally happen in 
an end to end toolchain for an FPGA.  But it is statements like this 
that I don't understand, "An open-source toolchain for the IGLOO parts 
could be an unusually powerful tool in the hands of a creative 
designer", or this "Because open source tools allow exploration of 
techniques which are restricted using regular tools."

Not trying to give anyone grief.  I'd just like to understand what 
people expect to happen with FOSS that isn't happening with the vendor's 
closed, but free tools.

-- 

Rick

Article: 158088
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: Philipp Klaus Krause <pkk@spth.de>
Date: Wed, 05 Aug 2015 23:30:53 +0200
On 05.08.2015 01:46, rickman wrote:
> On 8/4/2015 7:05 PM, Aleksandar Kuktin wrote:
>>
>> Hackability. If you have an itch, you can scratch it yourself with FOSS
>> tools. If you discover a bug, you can fix it yourself. If you want to
>> repurpose, optimize or otherwise change the tool, you can do it with
>> FOSS.
> 
> That's great.  But only important to a small few.  I use tools to get
> work done.  I have zero interest in digging into the code of the tools
> without a real need.  I have not found any bugs in the vendor's tools
> that would make me want to spend weeks learning how they work in the,
> most likely, vain hope that I could fix them.
> 
> I think FOSS is great and I am very happy to see that finally happen in
> an end to end toolchain for an FPGA.  But it is statements like this
> that I don't understand, "An open-source toolchain for the IGLOO parts
> could be an unusually powerful tool in the hands of a creative
> designer", or this "Because open source tools allow exploration of
> techniques which are restricted using regular tools."
> 
> Not trying to give anyone grief.  I'd just like to understand what
> people expect to happen with FOSS that isn't happening with the vendor's
> closed, but free tools.
> 

Same thing that's happening with compilers all the time.

Just a personal example:
A long time ago I decided to make a few games for the ColecoVision
console. The ColecoVision uses a Z80, and at the time all the other
homebrew game developers used an old DOS eval version of IAR within
Windows. I used the free sdcc compiler. Not always being happy with the
generated code, I started improving it, and later became the maintainer
of the Z80 port.
A few years ago I joined the group for theory of computer science at the
university in Frankfurt as a PhD student. I found that I could apply
graph structure theory in compiler construction. This resulted in some
quite unusual optimizations in SDCC currently not found in any other
compiler.

Philipp


Article: 158089
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: rickman <gnuarm@gmail.com>
Date: Wed, 05 Aug 2015 18:50:11 -0400
Links: << >>  << T >>  << A >>
On 8/5/2015 5:30 PM, Philipp Klaus Krause wrote:
> On 05.08.2015 01:46, rickman wrote:
>> On 8/4/2015 7:05 PM, Aleksandar Kuktin wrote:
>>>
>>> Hackability. If you have an itch, you can scratch it yourself with FOSS
>>> tools. If you discover a bug, you can fix it yourself. If you want to
>>> repurpose, optimize or otherwise change the tool, you can do it with
>>> FOSS.
>>
>> That's great.  But only important to a small few.  I use tools to get
>> work done.  I have zero interest in digging into the code of the tools
>> without a real need.  I have not found any bugs in the vendor's tools
>> that would make me want to spend weeks learning how they work in the,
>> most likely, vain hope that I could fix them.
>>
>> I think FOSS is great and I am very happy to see that finally happen in
>> an end to end toolchain for an FPGA.  But it is statements like this
>> that I don't understand, "An open-source toolchain for the IGLOO parts
>> could be an unusually powerful tool in the hands of a creative
>> designer", or this "Because open source tools allow exploration of
>> techniques which are restricted using regular tools."
>>
>> Not trying to give anyone grief.  I'd just like to understand what
>> people expect to happen with FOSS that isn't happening with the vendor's
>> closed, but free tools.
>>
>
> Same thing that's happening with compilers all the time.
>
> Just a personal example:
> A long time ago I decided to make a few games for the ColecoVision
> console. The ColecoVision uses a Z80, and at the time all the other
> homebrew game developers used an old DOS eval version of IAR within
> Windows. I used the free sdcc compiler. Not always being happy with the
> generated code, I started improving it, and later became the maintainer of
> the Z80 port.
> A few years ago I joined the group for theory of computer science at the
> university in Frankfurt as a PhD student. I found that I could apply
> graph structure theory in compiler construction. This resulted in some
> quite unusual optimizations in SDCC currently not found in any other
> compiler.

I think this is the point some are making.  The examples of the utility 
of FOSS tend to be obscure cases which impact a relatively 
small number of users.  I appreciate the fact that being able to tinker 
with the tools can be very useful to a few.  But those few must have the 
need as well as the ability.  With hardware development, both are less 
likely to happen.

Maybe I just don't have enough imagination.

-- 

Rick

Article: 158090
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: thomas.entner99@gmail.com
Date: Wed, 5 Aug 2015 15:52:52 -0700 (PDT)
Links: << >>  << T >>  << A >>
Am Mittwoch, 5. August 2015 23:30:58 UTC+2 schrieb Philipp Klaus Krause:
> On 05.08.2015 01:46, rickman wrote:
> > On 8/4/2015 7:05 PM, Aleksandar Kuktin wrote:
> >>
> >> Hackability. If you have an itch, you can scratch it yourself with FOSS
> >> tools. If you discover a bug, you can fix it yourself. If you want to
> >> repurpose, optimize or otherwise change the tool, you can do it with
> >> FOSS.
> >
> > That's great.  But only important to a small few.  I use tools to get
> > work done.  I have zero interest in digging into the code of the tools
> > without a real need.  I have not found any bugs in the vendor's tools
> > that would make me want to spend weeks learning how they work in the,
> > most likely, vain hope that I could fix them.
> >
> > I think FOSS is great and I am very happy to see that finally happen in
> > an end to end toolchain for an FPGA.  But it is statements like this
> > that I don't understand, "An open-source toolchain for the IGLOO parts
> > could be an unusually powerful tool in the hands of a creative
> > designer", or this "Because open source tools allow exploration of
> > techniques which are restricted using regular tools."
> >
> > Not trying to give anyone grief.  I'd just like to understand what
> > people expect to happen with FOSS that isn't happening with the vendor's
> > closed, but free tools.
> >
> 
> Same thing that's happening with compilers all the time.
> 
> Just a personal example:
> A long time ago I decided to make a few games for the ColecoVision
> console. The ColecoVision uses a Z80, and at the time all the other
> homebrew game developers used an old DOS eval version of IAR within
> Windows. I used the free sdcc compiler. Not always being happy with the
> generated code, I started improving it, and later became the maintainer of
> the Z80 port.
> A few years ago I joined the group for theory of computer science at the
> university in Frankfurt as a PhD student. I found that I could apply
> graph structure theory in compiler construction. This resulted in some
> quite unusual optimizations in SDCC currently not found in any other
> compiler.
> 
> Philipp

I think C compilers are the piece of software where open source works best, 
as there is a big user base, many of whom are skilled programmers. So there 
is both the skill and the motivation to improve the product.

For software not targeted at programmers, the user base must be very large 
to have sufficient contributors, IMHO.

For FPGA design, the user base is much smaller than for a C compiler. How 
many of them would really use the open-source alternatives when there are 
very advanced free vendor tools? And how many of them are really skilled 
software gurus? And have enough spare time? Of course you would find some 
students contributing (e.g. for their theses), but I doubt that will be 
enough to get a competitive product and to maintain it. New devices should 
be supported with short delay, otherwise the tool would not be very useful.

Of course it could be a good playground for students, to have a reference 
for future "real" jobs in the EDA field, but then the tool would not aim to 
be really used by the average FPGA designer...

BTW: Thanks for your contribution to SDCC; we ported it to our ERIC5 
soft-core many years ago. We also found quite some bugs at that time...

Thomas


Article: 158091
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Wed, 5 Aug 2015 23:46:20 +0000 (UTC)
Links: << >>  << T >>  << A >>
thomas.entner99@gmail.com wrote:
> Am Mittwoch, 5. August 2015 23:30:58 UTC+2 schrieb Philipp Klaus Krause:

(snip on open source hardware design tools)

>> Same thing that's happening with compilers all the time.
 
>> Just a personal example:
>> A long time ago I decided to make a few games for the ColecoVision
>> console. The ColecoVision uses a Z80, and at the time all the other
>> homebrew game developers used an old DOS eval version of IAR within
>> Windows. I used the free sdcc compiler. Not always being happy with the
>> generated code, I started improving it, and later became the maintainer of
>> the Z80 port.

(snip)
> I think C compilers are the piece of software where open source 
> works best, as there is a big user base, many of them are 
> skilled programmers. So there is both the skill and motivation 
> to improve the product.
 
> For software not targeted to programmers, the user base must be 
> very large to have sufficient contributors, IMHO.

I wonder what one would have said before gcc?

It used to be that unix always came with a C compiler, as one
was required to sysgen a kernel.  At one point, Sun changed to
bundling a minimal C compiler and charging for a better one.
That opened a door for gcc that might otherwise not have been there.
 
> For FPGA design, the user base is much smaller than for a 
> C compiler. How much of them would really use the open source 
> alternatives when there are very advanced free vendor tools? 
> And how much of them are really skilled software gurus? 
> And have enough spare time? Of course you would find some 
> students which are contributing (e.g. for their thesis), 
> but I doubt that it will be enough to get a competitive 
> product and to maintain it. New devices should be supported 
> with short delay, otherwise the tool would not be very useful.

Again, consider before gcc. I suspect that there are many times
more C programmers now than in the 1980s, yet there were enough
to cause gcc to exist.
 
> Of course it could be a good playfield for students, 
> to have a reference for future "real" jobs in the EDA field, 
> but then the tool would not aim to be really used by the 
> average FPGA designer...

People use gcc because it works well, and it works well because
people use it, and want it to work well. 

But one reason we have free HDL tools (from Xilinx and Altera)
now is related to the competition between them. With only
one FPGA company, there would be no need for competition,
tools could be expensive, and there could be a significant
advantage to FOSS tools.
 
> BTW: Thanks for your contribution to SDCC, we have ported 
> it to our ERIC5 soft-core many years ago. We also found 
> quite some bugs at that time...

-- glen

Article: 158092
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: rickman <gnuarm@gmail.com>
Date: Wed, 05 Aug 2015 20:16:07 -0400
Links: << >>  << T >>  << A >>
On 8/5/2015 7:46 PM, glen herrmannsfeldt wrote:
>
> But one reason we have free HDL tools (from Xilinx and Altera)
> now is related to the competition between them. With only
> one FPGA company, there would be no need for competition,
> tools could be expensive, and there could be a significant
> advantage to FOSS tools.

I'm not sure the price of the tools is so much related to the 
competition between the companies.  Hypothesizing only one FPGA company 
is not very realistic and it is certainly far down my list of concerns. 
  I expect the price of tools is much more related to promoting the 
"exploration" of the use of FPGAs.  If you even have to spend $100, that 
makes for a barrier to anyone wanting to start testing the tools.  I ran 
into this myself in jobs where I wanted to try something, but couldn't 
get one dime spent.  I can always find a little free time to spend on 
ideas, but spending money almost always goes through a review of some 
sort where they want you to show why and the "why" is what you want to 
determine.

-- 

Rick

Article: 158093
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Thu, 6 Aug 2015 01:13:43 +0000 (UTC)
Links: << >>  << T >>  << A >>
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)
>> But one reason we have free HDL tools (from Xilinx and Altera)
>> now is related to the competition between them. With only
>> one FPGA company, there would be no need for competition,
>> tools could be expensive, and there could be a significant
>> advantage to FOSS tools.
 
> I'm not sure the price of the tools is so much related to the 
> competition between the companies.  Hypothesizing only one FPGA company 
> is not very realistic and it is certainly far down my list of concerns. 
>  I expect the price of tools is much more related to promoting the 
> "exploration" of the use of FPGAs.  If you even have to spend $100, that 
> makes for a barrier to anyone wanting to start testing the tools.  I ran 
> into this myself in jobs where I wanted to try something, but couldn't 
> get one dime spent.  

OK, but as I understand it Altera started distributing free versions,
and Xilinx followed, presumably for competitive reasons.

As you note, the free versions allowed exploration.

If one hadn't done it first, the other might not have. 

> I can always find a little free time to spend on 
> ideas, but spending money almost always goes through a review of some 
> sort where they want you to show why and the "why" is what you want to 
> determine.

The way free market is supposed to work.
 

-- glen

Article: 158094
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: rickman <gnuarm@gmail.com>
Date: Wed, 05 Aug 2015 21:25:21 -0400
Links: << >>  << T >>  << A >>
On 8/5/2015 9:13 PM, glen herrmannsfeldt wrote:
> rickman <gnuarm@gmail.com> wrote:
>
> (snip, I wrote)
>>> But one reason we have free HDL tools (from Xilinx and Altera)
>>> now is related to the competition between them. With only
>>> one FPGA company, there would be no need for competition,
>>> tools could be expensive, and there could be a significant
>>> advantage to FOSS tools.
>
>> I'm not sure the price of the tools is so much related to the
>> competition between the companies.  Hypothesizing only one FPGA company
>> is not very realistic and it is certainly far down my list of concerns.
>>   I expect the price of tools is much more related to promoting the
>> "exploration" of the use of FPGAs.  If you even have to spend $100, that
>> makes for a barrier to anyone wanting to start testing the tools.  I ran
>> into this myself in jobs where I wanted to try something, but couldn't
>> get one dime spent.
>
> OK, but as I understand it Altera started distributing free versions,
> and Xilinx followed, presumably for competitive reasons.
>
> As you note, the free versions allowed exploration.
>
> If one hadn't done it first, the other might not have.

Perhaps, or it was just a matter of time.  Clearly the business model 
works and I think it was inevitable.  MCU vendors understand the 
importance and pay for tools to give away.  Why not give away $100 tool 
or even a $1000 tool if it will get you many thousands of dollars in 
sales?  It's the tool vendors who I expect have the bigger problem with 
this model.

For FPGAs the funny part is I was told a long time ago that Xilinx 
spends more on the software than they do designing the hardware.  The 
guy said they were a software company making money selling the hardware 
they support.


>> I can always find a little free time to spend on
>> ideas, but spending money almost always goes through a review of some
>> sort where they want you to show why and the "why" is what you want to
>> determine.
>
> The way free market is supposed to work.

Free market?  I'm talking about company internal management.  It is so 
easy to track every penny, but hard to track your time to the same 
degree.  Often this is penny wise, pound foolish, but that's the way it 
is.  I'm clear of that now by working for myself, but I still am happier 
to spend my time than my money, lol.

-- 

Rick

Article: 158095
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: John Miles <jmiles@gmail.com>
Date: Wed, 5 Aug 2015 18:48:07 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, August 4, 2015 at 4:46:49 PM UTC-7, rickman wrote:
> Not trying to give anyone grief.  I'd just like to understand what
> people expect to happen with FOSS that isn't happening with the vendor's
> closed, but free tools.
>

Here's one example: during development, I'm targeting an FPGA that's several
times larger than it needs to be, and the design has plenty of timing margin.
So why in the name of Woz do I have to cool my heels for 10 minutes every
time I tweak a single line of Verilog?

If the tools were subject to community development, they probably wouldn't
waste enormous amounts of time generating 99.9% of the same logic as last
time.  Incremental compilation and linking is ubiquitous in the software
world, but as usual the FPGA tools are decades behind.  That's the sort of
improvement that could be expected with an open toolchain.

It's as if Intel had insisted on keeping the x86 ISA closed, and you couldn't
get a C compiler or even an assembler from anyone else.  How much farther
behind would we be?  Well, there's your answer.
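[Editor's note: the hash-and-skip idea behind incremental compilation can be sketched in a few lines of Python. This is only an illustration of the concept, not any vendor's actual flow; the source list, cache file, and `compile_fn` callback are hypothetical stand-ins for a real synthesis step.]

```python
import hashlib
import json
import os

def file_hash(path):
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def incremental_build(sources, cache_file, compile_fn):
    """Recompile only the source units whose hash changed since last run.

    Returns the list of sources that were actually recompiled.
    """
    # Load the hashes recorded by the previous run, if any.
    cache = {}
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            cache = json.load(f)

    rebuilt = []
    for src in sources:
        h = file_hash(src)
        if cache.get(src) != h:   # new or modified source unit
            compile_fn(src)       # expensive step, e.g. synthesis
            rebuilt.append(src)
            cache[src] = h

    # Persist hashes so the next run can skip unchanged units.
    with open(cache_file, "w") as f:
        json.dump(cache, f)
    return rebuilt
```

Run twice with no edits and the second pass recompiles nothing; tweak one file and only that one is rebuilt, which is exactly the behavior being asked for above.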

-- john, KE5FX

Article: 158096
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: rickman <gnuarm@gmail.com>
Date: Wed, 05 Aug 2015 23:37:32 -0400
Links: << >>  << T >>  << A >>
On 8/5/2015 9:48 PM, John Miles wrote:
> On Tuesday, August 4, 2015 at 4:46:49 PM UTC-7, rickman wrote:
>> Not trying to give anyone grief.  I'd just like to understand what
>> people expect to happen with FOSS that isn't happening with the
>> vendor's closed, but free tools.
>>
>
> Here's one example: during development, I'm targeting an FPGA that's
> several times larger than it needs to be, and the design has plenty
> of timing margin.  So why in the name of Woz do I have to cool my
> heels for 10 minutes every time I tweak a single line of Verilog?
>
> If the tools were subject to community development, they probably
> wouldn't waste enormous amounts of time generating 99.9% of the same
> logic as last time.  Incremental compilation and linking is
> ubiquitous in the software world, but as usual the FPGA tools are
> decades behind.  That's the sort of improvement that could be
> expected with an open toolchain.
>
> It's as if Intel had insisted on keeping the x86 ISA closed, and you
> couldn't get a C compiler or even an assembler from anyone else.  How
> much farther behind would we be?  Well, there's your answer.

Don't know about Intel, but I seem to recall that Xilinx tools have 
incremental compilation.  Maybe they have dropped that.  They dropped a 
number of things over the years such as modular compilation which at one 
point a Xilinx representative swore to me was in the works for the lower 
cost Spartan chips and would be out by year end.  I think that was over 
a decade ago.

Even so, there are already FOSS HDL compilers available.  Do any of them 
offer incremental compilation?

I believe the P&R tools can work incrementally, but again, maybe that is 
not available anymore.  You used to be able to retain a portion of the 
routing and keep working on the rest over and over.  I think the idea 
was to let you have a lot of control over a small part of the design and 
then let the tool handle the rest on autopilot.

-- 

Rick

Article: 158097
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: John Miles <jmiles@gmail.com>
Date: Wed, 5 Aug 2015 22:54:01 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, August 5, 2015 at 8:37:41 PM UTC-7, rickman wrote:
> I believe the P&R tools can work incrementally, but again, maybe that is
> not available anymore.  You used to be able to retain a portion of the
> routing and keep working on the rest over and over.  I think the idea
> was to let you have a lot of control over a small part of the design and
> then let the tool handle the rest on autopilot.


If there's a way to do it in the general case I haven't found it. :(  I
wouldn't be surprised if they could leverage *some* previous output files,
but there are obviously numerous phases of the synthesis process that each
take a long time, and they would all have to play ball.

Mostly what I want is an option to allocate extra logic resources beyond
what's needed for a given build and use it to implement incremental changes
to the design.  No P&R time should be necessary in about 4 out of 5 builds,
given the way my edit-compile-test cycles tend to work.  I'm pretty sure
there's no way to tell it to do that.  It would be nice to be wrong.

-- john, KE5FX

Article: 158098
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: rickman <gnuarm@gmail.com>
Date: Thu, 06 Aug 2015 01:57:59 -0400
Links: << >>  << T >>  << A >>
On 8/6/2015 1:54 AM, John Miles wrote:
> On Wednesday, August 5, 2015 at 8:37:41 PM UTC-7, rickman wrote:
>> I believe the P&R tools can work incrementally, but again, maybe
>> that is not available anymore.  You used to be able to retain a
>> portion of the routing and keep working on the rest over and over.
>> I think the idea was to let you have a lot of control over a small
>> part of the design and then let the tool handle the rest on
>> autopilot.
>
>
> If there's a way to do it in the general case I haven't found it. :(
> I wouldn't be surprised if they could leverage *some* previous output
> files, but there are obviously numerous phases of the synthesis
> process that each take a long time, and they would all have to play
> ball.
>
> Mostly what I want is an option to allocate extra logic resources
> beyond what's needed for a given build and use it to implement
> incremental changes to the design.  No P&R time should be necessary
> in about 4 out of 5 builds, given the way my edit-compile-test cycles
> tend to work.  I'm pretty sure there's no way to tell it to do that.
> It would be nice to be wrong.

I'm not sure what that means, "allocate extra logic resources" and use 
them with no P&R time...?   Are you using the Xilinx tools?

-- 

Rick

Article: 158099
Subject: Re: Finally! A Completely Open Complete FPGA Toolchain
From: David Brown <david.brown@hesbynett.no>
Date: Thu, 06 Aug 2015 08:01:08 +0200
Links: << >>  << T >>  << A >>
On 06/08/15 01:46, glen herrmannsfeldt wrote:

> People use gcc because it works well, and it works well because
> people use it, and want it to work well. 
> 

One key difference here is that gcc is written in C (and now some C++),
and its main users program in C and C++.  Although compiler
design/coding is a different sort of programming than most of gcc's
users do, there is still a certain overlap and familiarity - the barrier
for going from user to contributor is smaller with gcc than it would be
for a graphics artist using GIMP or a writer using LibreOffice, or an
FPGA designer using these new tools.

The key challenge for open source projects like this is to develop a
community of people who understand the use of the tools, and understand
(and can contribute to) the coding.  Very often these are made by one or
two people - university theses are common - and the project dies away
when the original developers move on.  To be serious contenders for real
use, you need a bigger base of active developers and enthusiastic users
who help with the non-development work (documentation, examples,
testing, support on mailing lists) - MyHDL is an example of this in the
programmable logic world.




