StructureCMS

April 14, 2015

Sitefinity First Thoughts

Filed under: .net, Programming — joel.cass @ 4:06 pm

I have been playing with Sitefinity for the last 2 weeks or so. Being from a Sitecore background I must say that it has been difficult to adjust, to say the least.

While not as slick as Sitecore, Sitefinity is a pretty capable content management system. It allows a website to be built from scratch without any HTML development, which is great, as long as you’re not working with a design company that is delivering HTML they want you to implement :)

So we enter the world of HTML rendering and development in Sitefinity, which unfortunately, is not so good.

There is basically no framework available for developing pages. It would be great if they copied the templates / sublayouts system in Sitecore, where placeholders can be used to define editable areas, and the areas configured in the parent template roll through to the children.

Sitefinity seems to take a “one size fits all” approach, which means that you have to use their modules, their styles, their naming conventions, their CSS.

So, what if:

  • You have an editable content area that appears in the navigation dropdown?
  • You have a responsive design that has been delivered with its own styles and contexts?
  • You only want to configure the navigation, CSS, and content that appears on all pages in one place?

None of these seem possible. Placeholders can only be used within master templates or via their bizarre patterns for layout widgets. It’s not as simple as putting a placeholder tag wherever you want.

Say that you have 3 templates – one for home, one for landing page, one for content. Essentially they all use the same outer elements – navigation, imagery, breadcrumbs. But the content layout changes – home is a complex layout, landing is one column with no side navigation, and the content page is two columns, with the left being a floating column.

I was hoping that I could do one of the following:

  • Create a master page that has the outer elements configured and then implement the different layouts using nested master pages – not possible – even worse, the elements in the parent master page are no longer configurable
  • Use Layout Widgets to implement the different layouts – while this sort of worked I was unable to assign an ID to the element meaning the CSS would need hacking
  • Share configured areas between templates – I couldn’t find anything about it

So, what I’ve had to do is create 3 different master pages that all need their own configuration. Not impressed so far.

Next, we moved on to integrating our Active Directory with Sitefinity. We were hoping that if permissions were granted to certain AD roles, users in those roles would be able to access the administration area. But no, instead actual Sitefinity roles needed to be mapped to our equivalent roles and initialised upon startup, which was really frustrating.

And then we wanted to get onto creating our own role and membership providers because, surprise surprise, Sitefinity could not integrate with 2 different LDAP providers at the same time (which I admit is an edge case). We tried the ‘Data’ providers as recommended by Sitefinity and it was a living hell – because they were LINQ providers talking to a non-SQL platform, running a query to get a single user would fetch all (22,000+) records and then filter them down to one user – very inefficient. Furthermore, only the membership provider was documented; the roles provider didn’t work and needed hacking, as other poor users have discovered.

In the end, implementing the standard ASP.net patterns for security providers worked, and it worked well. Heaven knows why the developers are recommending the implementation of data providers.
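
For what it’s worth, the “standard pattern” here is just the ASP.net provider model: subclass RoleProvider (and MembershipProvider), implement the read-side members against your own directory query, and register the class in web.config. The sketch below is a rough, read-only role provider along those lines – the LDAP path, filter, and attribute names are illustrative assumptions, not our actual configuration:

using System;
using System.Configuration.Provider;
using System.DirectoryServices;
using System.Linq;
using System.Web.Security;

public class LdapRoleProvider : RoleProvider
{
    // Illustrative only - in practice this comes from the provider's
    // attributes in web.config, read in an Initialize() override.
    private string _ldapPath = "LDAP://DC=example,DC=local";

    public override string ApplicationName { get; set; }

    public override string[] GetRolesForUser(string username)
    {
        // One filtered query for one user, instead of pulling back all 22,000+ records.
        using (var root = new DirectoryEntry(_ldapPath))
        using (var searcher = new DirectorySearcher(root))
        {
            searcher.Filter = string.Format("(&(objectClass=user)(sAMAccountName={0}))", username);
            searcher.PropertiesToLoad.Add("memberOf");

            SearchResult result = searcher.FindOne();
            if (result == null)
            {
                return new string[0];
            }

            // Each memberOf value is a group DN; use the CN part as the role name.
            return result.Properties["memberOf"]
                .Cast<string>()
                .Select(dn => dn.Split(',')[0].Replace("CN=", ""))
                .ToArray();
        }
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        return GetRolesForUser(username).Contains(roleName, StringComparer.OrdinalIgnoreCase);
    }

    // The directory is read-only as far as the website is concerned,
    // so the write-side members simply refuse to run.
    public override void CreateRole(string roleName) { throw new ProviderException("Read-only provider."); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new ProviderException("Read-only provider."); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new ProviderException("Read-only provider."); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new ProviderException("Read-only provider."); }

    // Not needed for authorising access to the admin area in this sketch.
    public override bool RoleExists(string roleName) { return true; }
    public override string[] GetAllRoles() { return new string[0]; }
    public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
}

The provider is then registered under the roleManager element in web.config like any other ASP.net role provider.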

At any rate we have no choice but to keep rolling with the punches. I feel that Sitefinity may not have been the best choice, but I’m hoping that this is just the usual sort of trouble that comes with new software, and that once we’re used to it, working with Sitefinity will be much easier than it is right now.

June 20, 2014

Entity Framework Headaches!

Filed under: .net — joel.cass @ 3:41 pm

I want to get away from query-oriented code and especially the mess that can result from almost any ADO.Net implementation over time. I’ve been lucky to finally be able to work with the latest .net framework on a new project and thought I would give the Entity Framework (EF) a go. After all, it has worked quite well on other projects I have tried with MVC.

However, it seems that things are not so rosy when you come at it from a database-first perspective. Say that your database is designed by a proper DBA to have all the proper indexes, constraints, and data types in place, plus it’s locked down so you can’t manipulate it via code anyway. You would want to go database-first, right? Well, the tools we use should allow that.

I am using Visual Studio 2013, .net 4.5.1, and EF 6. And the experience has been anything but smooth.

First problem: Keys

Your Entity classes will not have keys. You will get the error “EntityType [Entity] has no key defined. Define the key for this EntityType”. But then you’re like, “the keys are there in the database! BLOODY HELL!”. So you look it up, and it seems you need to add the [Key] attribute above the field that represents the key in the database. Include the relevant namespace and you should be good to go, right? Wrong. Run a build and the class files are re-created, and your changes are lost.
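
For reference, this is the sort of generated class we’re trying to end up with – the entity and property names below are made up for illustration:

using System;
using System.ComponentModel.DataAnnotations;

// What the generated entity should look like once the template emits the
// attribute itself (TempRecord and its properties are illustrative names).
public partial class TempRecord
{
    [Key]
    public int TempRecordId { get; set; }

    public string Description { get; set; }
    public DateTime CreatedOn { get; set; }
}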

So what do you do? Open the edmx branch, find the relevant *.tt file, and:

Search for the string “simpleProperties”, then add the following like so (plus signs excluded):

var simpleProperties = typeMapper.GetSimpleProperties(entity);
    if (simpleProperties.Any())
    {
        foreach (var edmProperty in simpleProperties)
        {
+           if (ef.IsKey(edmProperty)) {
+#>    [Key]
+<#         }
#>
    <#=codeStringGenerator.Property(edmProperty)#>
<#
        }

…then, you will need to search for the method definition “UsingDirectives” and rewrite it as follows, so that the generated files also import the System.ComponentModel.DataAnnotations namespace that [Key] lives in:

public string UsingDirectives(bool inHeader, bool includeCollections = true)
    {
        return inHeader == string.IsNullOrEmpty(_code.VsNamespaceSuggestion())
            ? string.Format(
                CultureInfo.InvariantCulture,
                "{0}using System;{1}{2}{3}",
                inHeader ? Environment.NewLine : "",
                Environment.NewLine + "using System.ComponentModel.DataAnnotations;",
                includeCollections ? (Environment.NewLine + "using System.Collections.Generic;") : "",
                inHeader ? "" : Environment.NewLine)
            : "";
    }

Build your project, and hopefully the classes come out right this time.

Second problem: Performance on bulk actions

OK, so it’s running now. Say that you want to do a bulk delete on a table that you were using for temporary data. Well, EF is not good at that at all. Deleting 2,000 records takes about 3 minutes. What about 20,000? Don’t even bother. So you’ll need to hack around it:

                /* TOO SLOW!
                IEnumerable<TempRecord> aryRecords = objContext.TempRecords;
                foreach (TempRecord r in aryRecords)
                {
                    objContext.TempRecords.Remove(r);
                }
                objContext.SaveChanges();
                */
                // Bypass EF change tracking and clear the table in a single statement instead.
                string strTableName = "TempRecord";
                objContext.Database.ExecuteSqlCommand(String.Format("TRUNCATE TABLE {0}", strTableName));

So that works OK when connecting to the database, but what about when testing locally? Oops, next problem:

Second second problem: Database table names

When testing locally, the context will create a database using SqlLocalDB.exe, which is a nice idea. However, this is where it falls down: it creates the table names differently to the original schema. Say your table name was “TempRecord” (as some DB designers believe table names should NEVER be plural); it will create the table in the test database as “TempRecords”.

So begins the guessing game, as what makes it even worse is that the Entity Framework has NO METHOD FOR GETTING THE UNDERLYING TABLE NAME! DOUBLE BLOODY HELL!

So, what do you have to do? Run a fake query and then parse the SQL for the table name:

        // Note: requires "using System.Text.RegularExpressions;". Calling ToString() on a
        // DbSet returns the SELECT statement EF would run, so the underlying table name can
        // be pulled out of its FROM clause.
        public string GetTableName(DbSet dbset)
        {
            string sql = dbset.ToString();
            Regex regex = new Regex("FROM (?<table>.*) AS");
            Match match = regex.Match(sql);

            string table = match.Groups["table"].Value;
            return table;
        }

…and then update our preceding code:

                /* TOO SLOW!
                IEnumerable<TempRecord> aryRecords = objContext.TempRecords;
                foreach (TempRecord r in aryRecords)
                {
                    objContext.TempRecords.Remove(r);
                }
                objContext.SaveChanges();
                */
                string strTableName = objContext.GetTableName(objContext.TempRecords);
                objContext.Database.ExecuteSqlCommand(String.Format("TRUNCATE TABLE {0}", strTableName));

Another problem solved.

Third problem: Schema changes

Finally, what happens when the schema changes? Easy, you just update the EDMX from the database. It all works, then you decide to run some tests and you get the error “Model backing [Context] context has changed since database was created”. TRIPLE BLOODY HELL. So, you delete all the files you can find that reference the old model. But that’s not actually the problem – it’s the test database!

So what can you do? What makes it even worse is that Visual Studio does not expose this test database in any way – it’s like the localdb instance is a dirty little secret it does not want to give away. You have to open the command prompt (or PowerShell, in my case, as SqlLocalDB.exe was not on my cmd path) and run the following commands:

# list instances (in my case, it was using "v11.0")
SqlLocalDB info
# stop the instance
SqlLocalDB stop v11.0
# delete the instance
SqlLocalDB delete v11.0

…and then run your test. Hopefully, success!

Finally.

Even though this was horribly frustrating I feel that the EF is still the way to go. I just wish that Microsoft had spent that little bit of extra time QA’ing the database-first approach and the issues that arise in Visual Studio when testing using the local database. And it wouldn’t be a bad idea to have consistent underlying object (e.g. table) names, or at least expose them via the API somehow.

I don’t know where I’d be without the Internet, through which most of these problems were solved. It would have been a long, difficult road otherwise. The Microsoft documentation left a lot to be desired on all fronts when it came to resolving these issues.

May 26, 2010

.Net based HTTP Client in ColdFusion?!

Filed under: .net, ColdFusion — joel.cass @ 5:12 pm

I’ve been banging my head against the metaphorical walls around here for ages, trying to get ColdFusion to access websites via a proxy server that only supports NTLM authentication.

Short answer: don’t bother. CFHTTP does not support NTLM Authentication. Most of the Java libraries claiming to do so are hopeless. Support is inconsistent because no-one knows anything about the standard. Except Microsoft.

So it seemed only natural that the best way to solve the issue would be to use .net – and now that ColdFusion has a gateway to .net components, I could actually write something that solves the problem!

So what I have done is write a wrapper that can be accessed by ColdFusion, plus a simple custom tag to finish it off.
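
In rough terms, the wrapper boils down to a plain .net HTTP request with the proxy and NTLM credentials set explicitly – something like the sketch below. The class, method, and parameter names are illustrative only, not the actual project’s API:

using System.IO;
using System.Net;

// Illustrative sketch: an HTTP GET routed through a proxy that requires
// NTLM authentication, using nothing but the standard .net classes.
public class NtlmHttpClient
{
    public string Get(string url, string proxyAddress, string username, string password, string domain)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

        // Point the request at the proxy and hand it Windows (NTLM) credentials.
        WebProxy proxy = new WebProxy(proxyAddress);
        proxy.Credentials = new NetworkCredential(username, password, domain);
        request.Proxy = proxy;

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}

From the ColdFusion side, the .net gateway then only needs to instantiate the class and call Get(); the custom tag just hides that plumbing.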

Some more information regarding download and implementation is in the Projects section.