Then all bids have to be put in a list sorted from highest to lowest, and the top bid wins its trader automatically. Once a guild wins a bid, all the rest of that guild's bids are removed from the list, and the gold for those bids is returned via mail (something that failed miserably this week, BTW).
We also have to have rules for what happens in case of a tie, when the top two bids are the same amount and on the same trader. I have not addressed that here, but it could either be random, or, if ZOS wants to drain as much gold as possible from the economy, they could give the bid to the guild with the lower second bid.
Then you go to the next highest remaining bid to find out who gets trader #2, and repeat the same procedure until you are out of bids or all the traders are assigned.
If bids still remain after all traders are assigned, those bids will be refunded to the bidders.
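The procedure described above can be sketched as a short greedy algorithm. This is a minimal illustration of the proposed rules, not ZOS's actual implementation; all names here are hypothetical, and tie-breaking (discussed above) is left unspecified.

```python
from collections import namedtuple

# A bid: which guild placed it, on which trader, for how much gold
Bid = namedtuple("Bid", ["guild", "trader", "amount"])

def assign_traders(bids, traders):
    """Greedy assignment per the proposal: the highest remaining bid wins
    its trader, then every other bid from that guild is dropped and
    refunded; repeat until bids or traders run out."""
    remaining = sorted(bids, key=lambda b: b.amount, reverse=True)
    assignments = {}   # trader -> (guild, amount)
    refunds = []       # (guild, amount) to be returned via mail
    free = set(traders)
    while remaining and free:
        top = remaining.pop(0)
        if top.trader not in free:
            # This trader was already won by a higher bid: refund
            refunds.append((top.guild, top.amount))
            continue
        assignments[top.trader] = (top.guild, top.amount)
        free.discard(top.trader)
        # Remove and refund all other bids from the winning guild
        kept = []
        for b in remaining:
            if b.guild == top.guild:
                refunds.append((b.guild, b.amount))
            else:
                kept.append(b)
        remaining = kept
    # Anything left after all traders are assigned gets refunded too
    refunds.extend((b.guild, b.amount) for b in remaining)
    return assignments, refunds
```

For example, if guild A bids 100 on trader T1 and 90 on T2, guild B bids 80 on T1, and guild C bids 70 on T2, then A wins T1, its T2 bid is refunded, B's losing bid is refunded, and C wins T2.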
TriangularChicken wrote: »hope they are going to ban the people that exploited the situation...
mook-eb16_ESO wrote: »any software that does this is a bot and people should be banned for using it.
We all have our grievances with the zeni dev and support teams, but I think banning them would be an overreaction.
Knootewoot wrote: »It was properly tested by @zos standards.
MartiniDaniels wrote: »It is not software, it is server load. Exactly the same software dealt with multi-bidding properly on PC NA.
DaveMoeDee wrote: »It worked fine on PC/NA.
Here is a simple example. A script might have a bunch of SQL queries. Any query that returns a list of results could cause a problem if the data is so large that the query errors out. If the script didn't check properly that each query finished without an error, the script could interpret the empty result from the failed query as a valid data set containing 0 items. The script could continue on and really bad things can happen. That error would never occur during testing that isn't at a large enough scale for the query to fail.
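The failure mode described above can be made concrete. This is a hypothetical sketch (using Python's `sqlite3` as a stand-in database driver; the function and table names are invented): if the query's error is swallowed, a failed query looks identical to a legitimately empty result, so the check must fail loudly instead.

```python
import sqlite3

def fetch_bids(conn):
    """Return all bid rows, raising instead of silently yielding an
    empty list when the query itself fails."""
    try:
        cur = conn.execute("SELECT guild, amount FROM bids")
        rows = cur.fetchall()
    except sqlite3.Error as exc:
        # The bug in the example above: swallowing this error makes a
        # failed query indistinguishable from "there are no bids at all"
        raise RuntimeError("bid query failed; aborting flip") from exc
    return rows
```

With this guard, a query against a missing or overloaded table aborts the script rather than letting it continue on an empty "result set".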
Wrong. The issues in the above example can be caught and tested with synthetic tests that do not require full integration to a QA environment.

DaveMoeDee wrote: »"That error would never occur during testing that isn't at a large enough scale for the query to fail"
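The "synthetic tests" point above can be sketched like this: instead of reproducing a large-scale failure in a QA environment, you fake a connection whose query raises, the same way an overloaded server would. A minimal illustration (names are hypothetical, using `unittest.mock`):

```python
import sqlite3
from unittest.mock import MagicMock

def count_bids(conn):
    """Hypothetical script step that must fail loudly if its query errors."""
    try:
        return len(conn.execute("SELECT * FROM bids").fetchall())
    except sqlite3.Error as exc:
        raise RuntimeError("query failed") from exc

# Synthetic test: no large dataset needed, just a fake connection whose
# execute() raises, simulating the at-scale failure
fake_conn = MagicMock()
fake_conn.execute.side_effect = sqlite3.OperationalError("too much data")

try:
    count_bids(fake_conn)
    outcome = "silent empty result"
except RuntimeError:
    outcome = "loud failure"
```

If the script lacked the error check, the test would observe the "silent empty result" branch and fail, catching the bug long before production scale.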
Very bad controls around segregating environments. You should never be able to run these kinds of things directly in prod. You should have the script ready and validated in a lower environment (peer review, test runs, etc.), and then systematically promote it to a higher environment, not allowing for any manual change to the script in between.

DaveMoeDee wrote: »Context switching can lead to blunders. I remember dropping a table in a production database instead of on the server we were setting up to migrate to. I just lost track of which window was which.
It's exactly the same issue as above. If correct controls had been in place, that script would first have been peer reviewed, validated in a lower environment which you can just throw away and rebuild with the push of a button if something blows up, and only once confirmed working would it have been promoted to production.

DaveMoeDee wrote: »I also remember I once had some SQL I had used countless times to insert records into a table. In this instance, I was going to compare a column to a variable instead of a literal number to filter down the number of values to insert. Instead of id=@id, I typed id=id. The DBA was very busy the next morning removing hundreds of millions of records.
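The `id=id` blunder quoted above is worth seeing in miniature: `WHERE id = id` is true for every non-NULL row, so the filter silently vanishes. A small demonstration (table and column names are invented; `sqlite3` stands in for the real database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

target_id = 2

# Intended query: parameterised, filters down to one row (and makes
# the @id-style typo impossible in the first place)
narrow = conn.execute("SELECT val FROM src WHERE id = ?",
                      (target_id,)).fetchall()

# The typo: "id = id" compares the column to itself, which holds for
# every row, so every record matches
broad = conn.execute("SELECT val FROM src WHERE id = id").fetchall()
```

Parameterised queries are one guard here: there is no literal to mistype, and a peer review of the template catches the rest.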
I work as a software architect. At my work we cannot afford such hiccups, as they would have major consequences. And yet we manage it. It's not difficult if you have the correct controls and gates in place.
There are so many things wrong in your example:

DaveMoeDee wrote: »It worked fine on PC/NA.
Here is a simple example. A script might have a bunch of SQL queries. Any query that returns a list of results could cause a problem if the data is so large that the query errors out. If the script didn't check properly that each query finished without an error, the script could interpret the empty result from the failed query as a valid data set containing 0 items. The script could continue on and really bad things can happen. That error would never occur during testing that isn't at a large enough scale for the query to fail.
- poor design (list of results could cause a problem if the data is so large)
- inadequate testing (no simulation of what happens if a query errors out)
- inadequate code review (if the script didn't check properly that each query finished without an error)
What I see as a problem in the industry is that many companies are not willing to accept that these are necessary things (unless they are enforced by regulations in their industry), because they try to cut costs everywhere. Also, many couch developers are simply not thinking about the supportability, maintainability, and manageability of their work; they just botch something together, hoping "it will be fine".
This is especially true in the gaming industry where they don't really have to design any critical systems... The "worst" that can happen is some bugs in their game that they will fix in the next patch. Or so they think...
DaveMoeDee wrote: »I don't disagree with anything you said. I am actually giving examples of horrible practices resulting from people taking shortcuts to get things done. I am just pointing out that either (1) the testing IS responsible or (2) someone running an ad hoc script is responsible.
Thing is, it often isn't about cost cutting. It is often about customers needing things that can't wait for the dev team. In my case, the court says client A needs to provide document set X by Friday or potentially be fined $100k. Do you wait for engineering to add a feature needed to get the proper documents ready? Not going to happen. Sometimes you have to decide if the risk is worth it.
The process was altered for the NA server trader flip. Why do I say this? Observation.
I was standing at my guild's trader at the flip. I saw the tabards disappear, and the hired-trader info also cleared on my guild's home page. The tabards are easy to refresh, as you just need to look away and then back; for the home page, just close and reopen it.
So, since I saw both were gone and knowing how EU went, I tried to hire our now-vacant trader, but when I clicked the hire button I received a message which I will paraphrase (as I did not screenshot it): "Failed to hire, as trader resolution is still ongoing." It was worded a little differently than that, but that is what it meant.
It took about another 30 seconds, and then the home page updated, saying we had won a different trader, so I left to go verify in that zone. I found that it was true, and looked at the refund logs; all amounts were correct.
To me it looks like they did a lockdown of the traders until the flip was complete, which is how it should have been in the first place. I would say it was a combination of the software and server load; several things were overlooked.
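The lockdown behaviour observed above amounts to a simple state guard: while the weekly resolution is running, every hire attempt is rejected. A minimal sketch of that idea (all names are hypothetical; this is not ZOS's code):

```python
class TraderFlip:
    """Toy model of a trader flip that locks hiring until resolution ends."""

    def __init__(self):
        self.resolving = False

    def start_flip(self):
        # Lock all traders before clearing tabards and assigning winners
        self.resolving = True

    def finish_flip(self):
        # Only once every bid is resolved and refunded do hires reopen
        self.resolving = False

    def hire(self, trader):
        if self.resolving:
            return "Failed to hire: trader resolution is still ongoing."
        return f"Hired {trader}."
```

The ordering is the whole point: the lock goes on before any visible state changes, so no one can snipe a trader that only looks vacant mid-flip.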