
Thread: Windows Server 2008 R2 vs Server 2012

  1. #1
    Master
    Join Date
    Jun 2013
    Location
    liverpool UK
    Posts
    1,770

    Windows Server 2008 R2 vs Server 2012

    Our company develops and sells software to SMEs. We also sell and install the hardware to run it within the business.

    We have been installing Server 2008 R2 for years, but we have now tested 2012 and have been installing it for the last 6 months or so.

    Our lead tech has asked that all servers be specced much beefier for 2012, suggesting that 2012 needs around double the RAM for the same number of users.

    I'm not convinced and can't find any evidence online to suggest either way.

    Do any of you guys have a load of MS knowledge I can lean on?

  2. #2
    Master alfat33's Avatar
    Join Date
    Aug 2015
    Location
    London
    Posts
    6,199
    I think it all depends on your company's software, how it works and uses system resources, which edition of Server 2012 you install and what kind of hardware your customers have (not just RAM) etc.

    This is meant in a kindly way: why wouldn't you trust your lead tech? Wouldn't he or she be the best informed in this case, based on the test results and practical user experience?

  3. #3
    Master
    Join Date
    Jun 2013
    Location
    liverpool UK
    Posts
    1,770
    We install Server 2012 Standard as well as 2008 R2 Standard. We used to be able to use the 2008 R2 ROK for fewer than 5 terminals, but now can't use the equivalent 2012 Foundation.

    The guy just doesn't have any specific MS knowledge, and I feel like he is being super cautious and over-speccing everything, meaning I am supposed to sell hugely expensive 64GB Xeon servers with twin 1TB SSDs to small businesses.

  4. #4
    Master alfat33's Avatar
    Join Date
    Aug 2015
    Location
    London
    Posts
    6,199
    OK thanks. Probably not much I can add then. Best of luck.

  5. #5
    Master
    Join Date
    Jun 2015
    Location
    Edinburgh
    Posts
    3,040
    Blog Entries
    1
    Once you get above 4GB, the amount of RAM depends solely on the use of the server and has almost nothing to do with the operating system.

    To give you an idea - here are our current averages for physical Windows 2012 servers:

    Domain controllers for a domain with 10,000 users - 4GB
    Remote site file servers - 4GB
    Lightly used web servers - 4GB
    Heavily used web servers - 8GB
    SQL Server cluster - 16GB to 48GB per node
    Application servers - 4GB to 16GB, depending on the application.

    Most of our servers, however, are virtual machines on VMware vSphere and are assigned one CPU and 4GB by default regardless of purpose. Memory is then increased as required according to server performance. Out of ~500 servers, we have 10 on which this has been necessary.
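
    As a rough sketch of that start-small-and-grow approach (the 1.5x headroom factor and the size ladder below are illustrative assumptions, not anything measured in this thread):

    ```python
    # Illustrative RAM-sizing heuristic: start from observed peak usage,
    # add headroom, and round up to a standard memory configuration.
    # Both the headroom factor and the size ladder are assumptions.

    STANDARD_SIZES_GB = [4, 8, 16, 32, 48, 64]

    def recommend_ram_gb(peak_usage_gb: float, headroom: float = 1.5) -> int:
        """Smallest standard RAM size covering peak usage plus headroom."""
        target = peak_usage_gb * headroom
        for size in STANDARD_SIZES_GB:
            if size >= target:
                return size
        return STANDARD_SIZES_GB[-1]  # cap at the largest listed size

    # A lightly used web server peaking at 2.5GB fits comfortably in 4GB:
    print(recommend_ram_gb(2.5))   # → 4
    # Only a genuinely heavy workload gets anywhere near a 64GB spec:
    print(recommend_ram_gb(40.0))  # → 64
    ```

    The point of a rule like this is that the recommendation follows from a measurement, not from a blanket doubling of the spec.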

    If your system hosts the database and application on the same physical server, then this WILL increase system requirements, but a 64GB Xeon certainly seems excessive for all but tier 1 applications.


    On a more general point, we're seeing this attitude from a lot of software suppliers - they ask for crazy RAM levels and high-performance servers when nearly all the time real-world usage doesn't justify it. It's pure laziness and is often used to cover up a poor application - our analysis (from a server hosting perspective) is that when this kind of RAM/CPU level is ACTUALLY used, it's often caused by a lack of prowess in the application coding rather than the demands made of the system.
    Last edited by Scepticalist; 22nd February 2017 at 08:48.

  6. #6
    Master dice's Avatar
    Join Date
    Feb 2015
    Location
    London, UK
    Posts
    1,564
    Scepticalist has got it on the money. You really need to know what the intended use is. As an extreme example, I recently built a 2012 server spec'd out with 8GB RAM that is a DC and DHCP server, hosts the local DNS, acts as a print server, and is also a WSUS repository. Bad practice, yes, but for a small business it did what they needed, with the full disclosure that it really isn't scalable without making further changes.

    Conversely, most SQL servers I've worked on are upward of 64GB RAM. SQL will, by nature, use 90% of it.

  7. #7
    Master
    Join Date
    Jun 2013
    Location
    liverpool UK
    Posts
    1,770
    Our software is practice management software running on a SQL database, accessed from each terminal in a terminal server fashion.

    I know this is a vague explanation. What I am trying to get at is that when we were speccing 2008 R2, the system requirements from our tech guys were miles lower than what we have to spec now for 2012.

    Is the RAM overhead in 2012 really that much higher than in 2008?

  8. #8
    Master petethegeek's Avatar
    Join Date
    Jul 2011
    Location
    Worcestershire
    Posts
    2,930
    Quote Originally Posted by Scepticalist View Post
    On a more general point, we're seeing this attitude from a lot of software suppliers - they ask for crazy RAM levels and high-performance servers when nearly all the time real-world usage doesn't justify it. It's pure laziness and is often used to cover up a poor application - our analysis (from a server hosting perspective) is that when this kind of RAM/CPU level is ACTUALLY used, it's often caused by a lack of prowess in the application coding rather than the demands made of the system.
    Wirth's law - "...which states that software is getting slower more rapidly than hardware becomes faster."

  9. #9
    Master
    Join Date
    Jun 2015
    Location
    Edinburgh
    Posts
    3,040
    Blog Entries
    1
    Quote Originally Posted by bigweb View Post
    Our software is practice management software running on a SQL database, accessed from each terminal in a terminal server fashion.

    I know this is a vague explanation. What I am trying to get at is that when we were speccing 2008 R2, the system requirements from our tech guys were miles lower than what we have to spec now for 2012.

    Is the RAM overhead in 2012 really that much higher than in 2008?
    No, the overhead has hardly changed.

  10. #10
    Craftsman
    Join Date
    Jul 2008
    Location
    Barnsley, UK
    Posts
    296
    Sounds like a load of tosh to me. It also begs the question: why not Server 2012 R2 or 2016?

  11. #11
    Grand Master markrlondon's Avatar
    Join Date
    Feb 2009
    Location
    London, England
    Posts
    25,354
    Blog Entries
    26
    Quote Originally Posted by mattbeef View Post
    Sounds like a load of tosh to me. It also begs the question: why not Server 2012 R2 or 2016?
    Indeed.

    And why not properly benchmark the system so it is possible to objectively specify suitable hardware, host OS, and virtualised system on which it could run? I can't see why it is necessary to guess or tolerate any uncertainty for longer than it takes to do the testing necessary, nowadays.
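
    As a minimal sketch of that kind of objective benchmarking (the workload function here is a stand-in; a real test would issue actual queries against the application under test, and the user and request counts are arbitrary):

    ```python
    # Run a representative workload under several concurrent simulated
    # users and report latency percentiles, so hardware can be specced
    # from measurements rather than guesswork.
    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    def sample_workload() -> None:
        # Stand-in for one client request; replace with a real query.
        sum(i * i for i in range(10_000))

    def benchmark(workload, users: int = 8, requests_per_user: int = 50) -> dict:
        def timed_run(_user_id: int) -> list:
            latencies = []
            for _ in range(requests_per_user):
                start = time.perf_counter()
                workload()
                latencies.append(time.perf_counter() - start)
            return latencies

        # Each "user" is a thread issuing its requests back to back.
        with ThreadPoolExecutor(max_workers=users) as pool:
            all_latencies = [t for run in pool.map(timed_run, range(users)) for t in run]

        centiles = statistics.quantiles(all_latencies, n=100)
        return {
            "requests": len(all_latencies),
            "p50_s": centiles[49],   # median latency
            "p95_s": centiles[94],   # 95th-percentile latency
        }

    print(benchmark(sample_workload))
    ```

    Repeating a run like this on candidate hardware, at the customer's expected user count, gives numbers that settle the spec argument either way.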

  12. #12
    Why not skip 2012 and go to 2016? (ducks) Then you have the option of Nano Server and other cool things like Windows Storage Server 2016, etc...

    There's a free Microsoft two-day tech summit in March in Birmingham - it should answer a few questions:

    https://www.microsoft.com/en-gb/tech...irmingham.aspx

  13. #13
    Grand Master markrlondon's Avatar
    Join Date
    Feb 2009
    Location
    London, England
    Posts
    25,354
    Blog Entries
    26
    Quote Originally Posted by markrlondon View Post
    Indeed.

    And why not properly benchmark the system so it is possible to objectively specify suitable hardware, host OS, and virtualised system on which it could run? I can't see why it is necessary to guess or tolerate any uncertainty for longer than it takes to do the testing necessary, nowadays.
    Which possibly sounds a bit aggressive, so apologies if so.

    It's just that objective measurements are really helpful so there can be no doubt about allowable hardware/OS/software combinations. It takes some time and effort of course but that's part of supplying the entire package of turnkey software.

  14. #14
    Master
    Join Date
    Jun 2013
    Location
    liverpool UK
    Posts
    1,770
    Thanks for the advice guys.

    It sounds like they are just being super cautious and over-speccing things, as I thought.

    This just makes the software harder to sell which can be painful anyway.

  15. #15
    Master
    Join Date
    Jun 2014
    Location
    Yorkshire
    Posts
    1,132
    It's pure laziness and is often used to cover up a poor application
    You've hit the nail on the head with that one! Seen it time and again where people have to spend a fortune on hardware to get simple programs to work.
