Originally posted by: RMNA
Originally posted by: Carl Isenburg
Last time I looked (a couple of years ago...), the standard configuration was 7 BB credits on Brocade ports, so a link over 3.5 km will be running on E (i.e., out of credits). If you're running 4G, the frames are 1/2 km long, so you need twice as many to fill the link...
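For reference, here's a back-of-the-envelope sketch (Python; the ~2148-byte on-the-wire frame size, 8b/10b encoding and ~200 m/us fibre speed are textbook approximations, not figures from this thread) of why each doubling of link speed doubles the credits: a BB credit stays tied up for the whole round trip (frame out, R_RDY back), so each credit only "covers" about half the fibre the frame itself occupies.

    # Why 4G needs twice the credits of 2G over the same distance.
    # Assumptions: full-size frame ~2148 bytes on the wire (2112-byte
    # payload plus headers/CRC), 8b/10b encoding (10 line bits per
    # byte), light at ~200 m/us in fibre.

    FRAME_LINE_BITS = 2148 * 10   # full-size frame, 8b/10b encoded
    KM_PER_US = 0.2               # propagation speed in fibre

    for gbps in (1.0625, 2.125, 4.25):
        frame_us = FRAME_LINE_BITS / (gbps * 1000)   # serialisation time
        km_per_credit = frame_us * KM_PER_US / 2     # halved: round trip
        print(f"{gbps:.4g} Gb/s: one credit per ~{km_per_credit:.1f} km")

At 2 Gb/s that lands on roughly one credit per kilometre, and at 4 Gb/s roughly one per half-kilometre, matching the figures above.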
Originally posted by: stephen2615
Originally posted by: sruby8
We found out that our ISLs are a bit longer than we expected. It turns out there are two paths in the DWDM solution and we somehow got the longer one. Instead of around 8 km, it is about 35 km, which is almost the maximum distance for native FC. We got some Extended Fabrics licences and the issue has dropped a bit.

I have MRTG graphing the buffer-credit-zero tics, and on average I see fewer than 10 per second. Heavy traffic gets it up to around 150 per second. I don't exactly know what a tic is. Our HDS Brocade guru says it's OK to see this because "that's the way it is supposed to work". I am not really convinced, as with the Cisco MDS switches I never saw any issues, because each port on the more enterprise-class switches (such as the 9222i and above) has hundreds of B2B credits available per port.

I must admit HTnM is where I see these events, and HSSM only reports what I have set the real-time alerting to do. I quite often see this issue on the storage ports on the switch as well as on some host ports. I am not a fan of Brocade switches.

Stephen
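For what it's worth, the counter behind those "zero tics" on Brocade FOS is tim_txcrd_z in the portstatsshow output; as I understand it (worth confirming against your FOS documentation), it increments once for each 2.5-microsecond interval the port sits with zero transmit credits. Below is a minimal sketch of the sort of scrape MRTG could wrap, assuming SSH key access to the switch; the hostname, port number and admin user are placeholders, not details from this thread.

    # Hedged sketch: read tim_txcrd_z ("time at zero Tx credit") from
    # "portstatsshow <port>" so a poller such as MRTG can graph the
    # delta between samples. Host, user and port are placeholders.
    import re
    import subprocess

    def tim_txcrd_z(switch="brocade-sw1", port=0):
        # Run portstatsshow on the switch over SSH (key-based auth assumed).
        out = subprocess.run(
            ["ssh", f"admin@{switch}", f"portstatsshow {port}"],
            capture_output=True, text=True, check=True,
        ).stdout
        m = re.search(r"tim_txcrd_z\s+(\d+)", out)
        return int(m.group(1)) if m else None

    if __name__ == "__main__":
        print(tim_txcrd_z())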
Originally posted by: John Ellis
hrmm "that's the way it is supposed to work"
Originally posted by: Erwin van Londen
In my experience that *is* how it works. I've found that tuning long-distance links takes a little trial and error.

For a 2Gb link, the rule of thumb is 1 credit per kilometre. But that's for a full-size frame of 2112 bytes, which coincidentally is about 1 km long on the fibre (hard to visualise!). FC frames come in a range of sizes, though, and that has to be taken into account: if your average frame size is only 1000 bytes, then you're going to need double the credits straight away to keep the pipe full.

Then there's the added latency of your DWDM conversion equipment to consider. Let's say (waggles wet finger in the wind) that you've got 5 microseconds of latency per kilometre; your DWDM OTRs could add 20 microseconds at each end, possibly a lot more. If it's 20 at each end, that's the equivalent of adding another 8 km. You'll need to ask your equipment provider for that information, and hope that they're open and forthcoming about it.

What I've found is that if you're not happy with what you're seeing, then at a time of peak load, ramp up those credits gradually. If you see an increase in throughput, you're doing the right thing. Stop when either the link maxes out or throughput simply won't increase. In my shop I've found through tweaking that if I set my links to a little over 3 credits per km, I can sustainably max out the 2Gb link, which is all I can ask for. And yes, I do still see zero-buffer-credit conditions.
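This recipe boils down to a short calculation: credits are roughly the round-trip time divided by the frame serialisation time, with the DWDM equipment's latency folded in as equivalent kilometres. Here is a sketch under the wet-finger assumptions stated above (5 us/km, 20 us per OTR end); the function name and defaults are illustrative, not from any vendor tool.

    # BB-credit sizing per the recipe above. Latency of the DWDM gear
    # is converted to equivalent distance (e.g. 20 us per end = 40 us
    # total ~= 8 km of extra fibre at 5 us/km).

    US_PER_KM = 5.0          # one-way propagation in fibre (rule of thumb)
    LINE_BITS_PER_BYTE = 10  # 8b/10b encoding at 1/2/4 Gb/s

    def bb_credits(distance_km, gbps, avg_frame_bytes=2148, dwdm_latency_us=0.0):
        effective_km = distance_km + dwdm_latency_us / US_PER_KM
        frame_us = avg_frame_bytes * LINE_BITS_PER_BYTE / (gbps * 1000)
        round_trip_us = 2 * effective_km * US_PER_KM
        return round_trip_us / frame_us

    # Stephen's 35 km 2Gb ISL with 1000-byte average frames and 20 us
    # of DWDM latency per end:
    print(f"~{bb_credits(35, 2.125, avg_frame_bytes=1000, dwdm_latency_us=40):.0f} credits")

That works out to roughly 90 credits, i.e. about 2.6 credits per km, in the same ballpark as the "little over 3 credits per km" that tuning converged on above.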