StructureCMS

April 14, 2015

Sitefinity First Thoughts

Filed under: .net, Programming — joel.cass @ 4:06 pm

I have been playing with Sitefinity for the last 2 weeks or so. Being from a Sitecore background I must say that it has been difficult to adjust, to say the least.

While not as slick as Sitecore, Sitefinity is a pretty capable content management system. It allows a website to be built from scratch without any HTML development, which is great – as long as you’re not working with a design company that is delivering HTML they want you to implement :)

So we enter the world of HTML rendering and development in Sitefinity, which unfortunately, is not so good.

There is basically no framework available for developing pages. It would be great if Sitefinity copied the templates / sublayouts system from Sitecore, where placeholders can be used to define editable areas and the areas configured in a parent template roll through to its children.

Sitefinity seems to take a “one size fits all” approach, which means that you have to use their modules, their styles, their naming conventions, their CSS.

So, what if:

  • You have an editable content area that appears in the navigation dropdown?
  • You have a responsive design that has been delivered with their own styles and contexts?
  • You only want to configure the navigation / CSS / content that appears on all pages in one place?

None of these seem possible. Placeholders are only possible when put within master templates or when using their bizarre patterns for layout widgets. It’s not as simple as putting a placeholder tag wherever you want.

Say that you have 3 templates – one for home, one for landing page, one for content. Essentially they all use the same outer elements – navigation, imagery, breadcrumbs. But the content layout changes – home is a complex layout, landing is one column with no side navigation, and the content page is two columns, with the left being a floating column.

I was hoping that I could either

  • Create a master page that has the outer elements configured and then implement the different layouts using nested master pages – not possible – even worse, the elements in the parent master page are no longer configurable
  • Use Layout Widgets to implement the different layouts – while this sort of worked, I was unable to assign an ID to the element, meaning the CSS would need hacking
  • Share configured areas between templates – I couldn’t find anything about it

So, what I’ve had to do is create 3 different master pages that all need their own configuration. Not impressed so far.

Next, we’re on to integrating our Active Directory with Sitefinity. We were hoping that if the permissions were granted to certain roles, then those roles would be able to access the administration. But no – actual Sitefinity roles needed to be mapped to our equivalent roles and initialised upon startup, which was really frustrating.

And then we wanted to get onto creating our own role and membership providers because, surprise surprise, Sitefinity could not integrate with 2 different LDAP providers at the same time (which I admit is an edge case). We tried the ‘Data’ providers as recommended by Sitefinity and it was a living hell – because they were LINQ providers talking to a non-SQL platform, running a query to get a single user would fetch all (22,000+) records and then filter them down to one user – very inefficient. Furthermore, only the membership provider was documented; the roles provider didn’t work and needed hacking, as discovered by other poor users.

In the end, implementing the standard ASP.net patterns for security providers worked, and it worked well. Heaven knows why the developers are recommending the implementation of data providers.
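
For anyone heading down the same path, the standard pattern boils down to subclassing the ASP.net provider base classes and doing targeted lookups yourself. A rough sketch of the role side, assuming a hypothetical LdapDirectory helper (none of these names are from our actual implementation):

using System;
using System.Web.Security;

// A minimal sketch of the standard ASP.net pattern: a read-only RoleProvider backed by
// a directory. LdapRoleProvider and LdapDirectory are hypothetical names, not our code.
public class LdapRoleProvider : RoleProvider
{
    public override string ApplicationName { get; set; }

    public override string[] GetRolesForUser(string username)
    {
        // one targeted directory lookup per user, rather than loading every record
        return LdapDirectory.GetGroupsForUser(username);
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
    }

    public override string[] GetAllRoles()
    {
        return LdapDirectory.GetAllGroups();
    }

    public override bool RoleExists(string roleName)
    {
        return Array.IndexOf(GetAllRoles(), roleName) >= 0;
    }

    // Write operations are not needed for a read-only directory
    public override void CreateRole(string roleName) { throw new NotSupportedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
}

// Stubbed here so the sketch compiles; the real thing would run targeted LDAP queries.
static class LdapDirectory
{
    public static string[] GetGroupsForUser(string username) { return new string[0]; }
    public static string[] GetAllGroups() { return new string[0]; }
}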

At any rate we have no choice but to keep rolling with the punches. I feel that Sitefinity may not have been the best choice, but I’m hoping that this is just the usual sort of trouble that comes with new software, and that once we’re used to it, working with Sitefinity will be much easier than it is right now.

June 20, 2014

Entity Framework Headaches!

Filed under: .net — joel.cass @ 3:41 pm

I want to get away from query-oriented code and especially the mess that can result from almost any ADO.Net implementation over time. I’ve been lucky to finally be able to work with the latest .net framework on a new project and thought I would give the Entity Framework (EF) a go. After all, it has worked quite well on other projects I have tried with MVC.

However, it seems that things are not so rosy when you come at it from a database-first perspective. Say that your database is designed by a proper DBA to have all the proper indexes, constraints, and data types in place, plus it’s locked down so you can’t manipulate it via code anyway. You would want to go database-first, right? Well, the tools we use should allow that.

I am using Visual Studio 2013, .net 4.5.1, and EF 6. And the experience has been anything but smooth.

First problem: Keys

Your entity classes will not have keys. You will get the error “EntityType [Entity] has no key defined. Define the key for this EntityType”. But then you’re like, “the keys are there in the database! BLOODY HELL!”. So you look it up, and it seems you need to add the [Key] attribute above the field that represents the key in the database. Include the relevant namespace and you should be good to go, right? Wrong. Run a build and the class files are re-created, and your changes are lost.

So what do you do? Open the edmx branch, find the relevant *.tt file, and:

Search for the string “simpleProperties”, then add the following like so (plus signs excluded):

var simpleProperties = typeMapper.GetSimpleProperties(entity);
    if (simpleProperties.Any())
    {
        foreach (var edmProperty in simpleProperties)
        {
+			if (ef.IsKey(edmProperty)) {
+				#>    [Key]
+<#		    }
#>
    <#=codeStringGenerator.Property(edmProperty)#>
<#
        }

…then, you will need to search for the method definition “UsingDirectives” and rewrite it as follows:

public string UsingDirectives(bool inHeader, bool includeCollections = true)
    {
        return inHeader == string.IsNullOrEmpty(_code.VsNamespaceSuggestion())
            ? string.Format(
                CultureInfo.InvariantCulture,
                "{0}using System;{1}{2}{3}",
                inHeader ? Environment.NewLine : "",
                Environment.NewLine + "using System.ComponentModel.DataAnnotations;",
                includeCollections ? (Environment.NewLine + "using System.Collections.Generic;") : "",
                inHeader ? "" : Environment.NewLine)
            : "";
    }

Build your project, and hopefully the classes come out right this time.
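
If the template edits took, the regenerated classes should now carry the attribute – roughly like this (the entity and property names here are just for illustration):

using System;
using System.ComponentModel.DataAnnotations;

// Illustrative only: what a regenerated entity should look like once the .tt emits [Key]
public partial class TempRecord
{
    [Key]
    public int TempRecordId { get; set; }

    public string Description { get; set; }
    public DateTime CreatedOn { get; set; }
}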

Second problem: Performance on bulk actions

OK, so it’s running now. Say that you want to do a bulk delete on a table that you were using for temporary data. Well, EF is not good at that at all. Deleting 2,000 records takes about 3 minutes. What about 20,000? Don’t even bother. So you’ll need to hack around it:

                /* TOO SLOW!
                IEnumerable<TempRecord> aryRecords = objContext.TempRecords;
                foreach (TempRecord r in aryRecords)
                {
                    objContext.TempRecords.Remove(r);
                }
                objContext.SaveChanges();
                */
                string strTableName = "TempRecord";
                objContext.Database.ExecuteSqlCommand(String.Format("TRUNCATE TABLE {0}", strTableName));

So that works OK when connecting to the database, but what about when testing locally? Oops, next problem:

Second second problem: Database table names

When testing locally, the context will create a database using SqlLocalDB.exe, which is a nice idea. However, this is where it falls down: it creates the table names differently from the original schema. Say your table name was “TempRecord” (as some DB designers believe tables should NEVER be plurals) – it will create the table in the temp database as “TempRecords”.

So begins the guessing game, as what makes it even worse is that the Entity Framework has NO METHOD FOR GETTING THE UNDERLYING TABLE NAME! DOUBLE BLOODY HELL!

So, what do you have to do? Run a fake query and then parse the SQL for the table name:

        // Needs: using System.Text.RegularExpressions; (and System.Data.Entity for DbSet)
        public string GetTableName(DbSet dbset)
        {
            // ToString() on a DbSet returns the SQL EF would run; pull the table name out of it
            string sql = dbset.ToString();
            Regex regex = new Regex("FROM (?<table>.*) AS");
            Match match = regex.Match(sql);

            string table = match.Groups["table"].Value;
            return table;
        }

…and then update our preceding code:

                /* TOO SLOW!
                IEnumerable<TempRecord> aryRecords = objContext.TempRecords;
                foreach (TempRecord r in aryRecords)
                {
                    objContext.TempRecords.Remove(r);
                }
                objContext.SaveChanges();
                */
                string strTableName = objContext.GetTableName(objContext.TempRecords);
                objContext.Database.ExecuteSqlCommand(String.Format("TRUNCATE TABLE {0}", strTableName));

Another problem solved.

Third problem: Schema changes

Finally, what happens when the schema changes? Easy, you just update the EDMX from the database. It all works, then you decide to run some tests and you get the error “Model backing [Context] context has changed since database was created” TRIPLE BLOODY HELL. So, you delete all the files you can find that reference the old model. But that’s not actually the problem, it’s the test database!

So what can you do? What makes it even worse is that Visual Studio does not expose this test database in any way – it’s like the localdb instance is a dirty little secret it does not want to give away. You have to open the command prompt (or PowerShell in my case, as SqlLocalDB.exe was not on my cmd path), and run the following commands:

// list databases (in my case, it was using "v11.0")
SQLLocalDB info
// stop database
SQLLocalDB stop v11.0
// delete database
SQLLocalDB delete v11.0

…and then run your test. Hopefully, success!

Finally.

Even though this was horribly frustrating, I feel that EF is still the way to go. I just wish that Microsoft had spent that little bit of extra time QA’ing the database-first approach and the issues that arise in Visual Studio when testing against the local database. And it wouldn’t be a bad idea to have consistent underlying object (e.g. table) names, or at least expose them via the API somehow.

I don’t know where I’d be without the Internet, which is where most of these problems were solved. It would have been a long, difficult road otherwise. The Microsoft documentation, on the other hand, left a lot to be desired on all fronts when it came to resolving these issues.

January 27, 2012

Using the SmarterStats Query API service in ColdFusion

Filed under: ColdFusion, Programming — joel.cass @ 3:53 pm

I must admit that it’s been a while since I’ve posted anything in this blog. To be honest, it’s mainly because there’s been nothing remarkable to post about. Nothing anyone would benefit from knowing, really.

It’s even to the point that my role involves much more than just programming these days – recently I have been involved in reviewing web statistics packages in the hope of landing on a solution. Well, the solution was found in SmarterStats – quite a remarkable web statistics package that does most things. It even has an API.

About that API – it includes a largely undocumented ‘Query’ web service. It is not enabled by default, but once enabled it allows you to query the statistics data directly.

To enable the service, you will need to create an authorisation key. A good one to use is a UUID. Then, you need to define that key in [SmarterStats_Root]\MRS\App_Data\Config\AppConfig.xml:

    <LocalHostDeleted>false</LocalHostDeleted>
++  <WebServiceAuthorizationCode>nnnnnnnn-nnnn-nnnn-nnnnnnnnnnnnnnnn</WebServiceAuthorizationCode>
    <ExpirationNotification />

Then you can access the service via http://localhost:9999/Services/Query.asmx, with the WSDL at http://localhost:9999/Services/Query.asmx?wsdl

The query service uses a form of pseudo-SQL that runs a query against what looks like a function result, i.e. SELECT * FROM fTableName(siteId, dateFrom, dateTo, maxItems, extraParam**) WHERE blah=blah ORDER BY blah.

** Parameters maxItems and extraParam are undocumented and were found by chance. maxItems is pretty obvious – it’s the number of records to retrieve. Use extraParam to filter certain queries by page. Only certain queries support the extra parameter, so try them out.

A list of tables is available in the file [SmarterStats_Root]\MRS\App_Data\Config\ReportConfig.xml – you can copy this to your development directory and then use it to generate a list of available reports.

Here are some example reports you can call using the web service (assume site id = 1):

  • Get Daily Site Traffic
    SELECT * FROM fActivityTotalTrend(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00', 50)
  • Get Daily traffic to home page
    SELECT * FROM fDailyActivityForFile(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00', 20, '/')
  • All pages, ordered by popularity
    SELECT * FROM fTopPages(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00')
  • Top 10 most popular pages
    SELECT * FROM fTopPages(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00', 10)
  • Web browsers
    SELECT * FROM fBrowsers(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00')
  • Visitors by City
    SELECT * FROM fGeographicCountryByCity(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00')
  • Visitors by City, to a certain URL
    SELECT * FROM fGeographicsByFile(1, '2012-01-01T00:00:00', '2012-01-27T00:00:00', 20, '/some/url/')

Calling the web service from ColdFusion is pretty easy:

<cfsetting enablecfoutputonly="true">

<!--- 'static' service parameters --->
<cfset strUsername = "username">
<cfset strPassword = "password">
<cfset strAuthCode = "nnnnnnnn-nnnn-nnnn-nnnnnnnnnnnnnnnn">
<cfset strWebServiceUrl = "http://127.0.0.1:9999/Services/Query.asmx?wsdl">

<!--- initialise web service --->
<cfset objWebService = createObject("webservice", strWebServiceUrl)>

<!--- query parameters (you can pass these in from a form) --->
<cfset strReport = "fTopPages">
<cfset numSiteId = 1>
<cfset dateFrom = createDate(year(now()), month(now()), 1)>
<cfset dateTo = createDate(year(now()), month(now()), day(now()))>
<cfset numRows = 20>

<!--- compose query --->
<cfset strQuery = "SELECT * FROM #strReport#(#numSiteId#, '#dateformat(dateFrom,'yyyy-mm-dd')#T00:00:00', '#dateformat(dateTo,'yyyy-mm-dd')#T00:00:00')">

<!--- execute query --->
<cfset dsResult = objWebservice.executeQuery(strUsername, strPassword, strAuthCode, numSiteId, strQuery, numRows)>

<!--- convert to query --->
<cfset qryResult = datasetToQuery(dsResult, 'Table1')><!--- version 7: it is 'Table1', version 6: it is 'results' --->

<!--- output result --->
<cfdump var="#qryResult#">

<!--- FUNCTION TO CONVERT .NET DATASET TO QUERY (added here for convenience - move to helper class) --->

<cffunction name="datasetToQuery" access="public" returntype="query" output="false">
	<cfargument type="any" name="dataset" required="true">
	<cfargument type="string" name="tablename" required="true">

	<!--- dataset has 2 nodes: 1) Column definitions 2) Data --->

	<cfset var qryResult = "">
	<cfset var lstColumns = "">
	<cfset var lstTypes = "">
	<cfset var aryDataset = ARGUMENTS.dataset.get_any()>
	<cfset var aryColumns = XmlSearch(aryDataset[1].getAsString(), "/xs:schema/xs:element[@name='#ARGUMENTS.tablename#']/xs:complexType/xs:sequence/xs:element")>
	<cfset var aryData = XmlSearch(aryDataset[2].getAsString(), "/diffgr:diffgram/NewDataSet/#ARGUMENTS.tablename#")>
	<cfset var i = 0>
	<cfset var c = 0>

	<!--- get columns --->
	<cfloop from="1" to="#arrayLen(aryColumns)#" index="i">
		<cfset lstColumns = listAppend(lstColumns, aryColumns[i].xmlAttributes.name)>
		<cfswitch expression="#aryColumns[i].xmlAttributes.type#">
			<cfcase value="xs:double,xs:long">
				<cfset lstTypes = listAppend(lstTypes, 'double')>
			</cfcase>
			<cfcase value="xs:date">
				<cfset lstTypes = listAppend(lstTypes, 'timestamp')>
			</cfcase>
			<cfdefaultcase>
				<cfset lstTypes = listAppend(lstTypes, 'varchar')>
			</cfdefaultcase>
		</cfswitch>
	</cfloop>

	<!--- create query object --->
	<cfset qryResult = queryNew(lstColumns, lstTypes)>

	<!--- populate query --->
	<cfloop from="1" to="#arrayLen(aryData)#" index="i">
		<cfset queryAddRow(qryResult)>
		<cfloop from="1" to="#arrayLen(aryData[i].xmlChildren)#" index="c">
			<cfset querySetCell(qryResult, aryData[i].xmlChildren[c].xmlName, aryData[i].xmlChildren[c].xmlText)>
		</cfloop>
	</cfloop>

	<cfreturn qryResult>
</cffunction>

<cfsetting enablecfoutputonly="false">

All this was tested in SmarterStats 7.

August 17, 2011

ColdFusion on Linux – Make sure your hostname is correct!

Filed under: ColdFusion, Technology — joel.cass @ 10:52 am

Recently I was given the task of installing ColdFusion on CentOS. Everything went well: Apache installed fine, related dependencies installed fine, even ColdFusion installed fine. Until I tried accessing the site, at which point I was presented with this error:

java.lang.NullPointerException
	at java.lang.String.indexOf(String.java:1733)
	at java.lang.String.indexOf(String.java:1715)
	at jrun.servlet.session.SessionService.getUrlSessionID(SessionService.java:1097)
	at jrun.servlet.ForwardRequest.getRequestedSessionId(ForwardRequest.java:426)
	at jrun.servlet.ForwardRequest.isRequestedSessionIdValid(ForwardRequest.java:467)
	at jrun.servlet.ForwardRequest.getSession(ForwardRequest.java:344)
	at jrun.servlet.ForwardRequest.create(ForwardRequest.java:135)
	at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:253)
	at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
	at jrun.servlet.http.WebService.invokeRunnable(WebService.java:172)
	at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
	at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)

I had installed ColdFusion 9, using a method similar to the one described here. So, I tried installing ColdFusion 8 following the same method. Still got the error. I then tried installing ColdFusion 9 to run as the root user (not recommended). Still got the error.

So I pulled my hair out for a bit, and then started scanning logs, upon which I came across these lines:

08/16 23:23:54 error centostemplate.xxxx.xxx.au: centostemplate.xxxx.xxx.au
java.net.UnknownHostException: centostemplate.xxxx.xxx.au: centostemplate.xxxx.xxx.au

…and then it all fell into place! The instance I had been given was created from a template, in which the host name was set in /etc/sysconfig/network to ‘centostemplate.xxxx.xxx.au’ – which did not resolve via DNS! So, the easy fix was to map this over to localhost in /etc/hosts, i.e.

127.0.0.1    centostemplate.xxxx.xxx.au

Restart the services and it’s fixed! This was CentOS in my case, but if you ever run into this problem on any *nix platform, check your network config and make sure the configured hostname resolves to an IP address.

February 9, 2011

Retrieving HTTP URLs in PHP

Filed under: PHP — joel.cass @ 9:32 am

It’s strange how many different ways there are to do the same thing in PHP. For example, if you want to retrieve a URL, it can be as easy as calling file_get_contents($url), or you can use the PECL libraries, or you can go dig up an open source project such as this one.

I was messing around one night and figured it would be possible to just run an HTTP request over a socket. As it turns out, it’s not so difficult – there is tons of information out there on how to do it, and it wasn’t long before I had a method figured out.

The advantage of this is that it is lightweight and gives you some control over the headers (etc) that you want to send/receive. This has only been tested on text-only requests.

function get_http_content ($url, $timeout = 3, $headers = array()) {
	// initialise return variable
	$stcReturn = array("headers"=>array(), "content"=>"");

	// get server name, port, path from URL
	$strRegex = "/^(http[s]?)\:\/\/([^\:\/]+)[\:]?([0-9]*)(.*)$/";
	$strServer = preg_replace($strRegex,"$2",$url);
	$strPath = preg_replace($strRegex,"$4",$url);
	$numPort = preg_replace($strRegex,"$3",$url);
	if ($numPort == "") {
		if (preg_replace($strRegex,"$1",$url) == "https") {
			// HTTPS would need an SSL-capable transport, so bail out early
			$stcReturn["headers"]["Status-Code"] = "0";
			$stcReturn["headers"]["Status"] = "HTTPS is not supported";
			$stcReturn["content"] = "Error: HTTPS is not supported";
			return $stcReturn;
		} else {
			$numPort = 80;
		}
	}

	// connect to server, run request
	$objSocket = fsockopen($strServer, $numPort, $numError, $strError, $timeout);
	if (!$objSocket) {
		// connection not possible
		$stcReturn["headers"]["Status-Code"] = $numError;
		$stcReturn["headers"]["Status"] = $strError;
		$stcReturn["content"] = "Error: {$strError} ({$numError})";
	} else {
		// connection made - send headers
		$strOut = "GET {$strPath} HTTP/1.1\r\n";
		$strOut .= "Host: {$strServer}\r\n";
		$strOut .= "Connection: Close\r\n";
		foreach ($headers as $strName=>$strValue) {
			$strOut .= "$strName: $strValue\r\n";
		}
		$strOut .= "\r\n";
		// get data
		fwrite($objSocket, $strOut);
		$strIn = "";
		while (!feof($objSocket)) {
			$strIn .= fgets($objSocket, 128);
		}
		fclose($objSocket);

		// split data into lines
		$aryIn = explode("\r\n", $strIn);

		// data is split into headers/content by double CR
		$bHeader = true;
		foreach ($aryIn as $i=>$strLine) {
			if ($i == 0) {
				// first line is [protocol] [status code] [status]
				$stcReturn["headers"]["Protocol"] = preg_replace("/^([^ ]+) ([^ ]+) (.+)$/", "$1", $strLine);
				$stcReturn["headers"]["Status-Code"] = preg_replace("/^([^ ]+) ([^ ]+) (.+)$/", "$2", $strLine);
				$stcReturn["headers"]["Status"] = preg_replace("/^([^ ]+) ([^ ]+) (.+)$/", "$3", $strLine);
			} elseif ($bHeader && $strLine == "") {
				// if line is empty headers have ended
				$bHeader = false;
			} elseif ($bHeader) {
				// set header
				$stcReturn["headers"][preg_replace("/^([^\:]+)\:[ ]*(.+)$/", "$1", $strLine)] = preg_replace("/^([^\:]+)\:[ ]*(.+)$/", "$2", $strLine);
			} else {
				// set content
				$stcReturn["content"] .= $strLine;
				if ($i < count($aryIn)-1) {
					$stcReturn["content"] .= "\r\n";
				}
			}
		}
	}
	return $stcReturn;
}

December 7, 2010

Configuring ColdFusion to have different JVM Settings per instance

Filed under: ColdFusion, Programming — joel.cass @ 9:08 am

Recently I upgraded a ColdFusion server from standalone to multi-instance. This was easy – basically a matter of installing a new copy of ColdFusion as multi-instance and copying the settings across from the standalone instance. However, the issue has now arisen that every app on the server is to run as its own instance, and if they all share the same settings, there will not be enough memory to run each instance smoothly.

The problem is that some instances will require more memory while some will require less. Adobe had posted how to do this on their website but it seems to have been deleted recently. Luckily the instructions were still available in Google’s cache! I’ve copied the instructions here for later reference. As I will forget…

Basically, it’s three steps:

  1. Open up the JRun/bin directory
  2. Copy jvm.config to jvm_<server_name>.config
  3. Configure the startup script by
    • Windows: use jrunsvc -remove "<service_name>", then jrunsvc -install <server_name> <server_name> "<service_name>" -config jvm_<server_name>.config
    • Linux/Unix/Mac: add -config jvm_<server_name>.config to the startup command, e.g. jrun -start default -config jvm_<server_name>.config

* <server_name> is the name of the folder under the JRun/servers directory that contains the server, e.g. “cfusion”
** <service_name> is the name of the service in Windows, e.g. “Macromedia JRun cfusion Service”
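
For reference, the per-instance memory tuning itself then happens inside each copied config, via the java.args line – something like this (the values here are examples only, adjust per instance):

# jvm_cfusion.config (example values only)
java.args=-server -Xms256m -Xmx768m -XX:MaxPermSize=192m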

November 5, 2010

Setting up an extranet login page in Sitecore

Filed under: Sitecore — joel.cass @ 8:30 am

Recently, I had issues with the setup of a public logon page in Sitecore. The setup was very similar to the way that logins work in the Intranet solution, e.g.

1. A login.aspx page is created in the project folder
2. Settings are added to the web.config <site> tag: loginPage="/login.aspx" + requireLogin="true"
3. The login page either displays a form or is secured in IIS and then gets the AUTH_USER header to log in users (if implementing an AD solution)

The problem is, so it seems, that the latest version of Sitecore (6.2) works differently from previous versions as documented here and here. The URL parameters item, user, and site are no longer passed. Furthermore, adding a SecurityResolver pipeline didn’t seem to work any longer.

So in 6.2, when a user cannot be authenticated to access a page, they are simply redirected to /login.aspx without any return URL or other useful information. This makes the situation even worse if you are trying to preview a page from the administration interface – basically every initial request is redirected to /login.aspx, and once authenticated the user is returned to the home page, as the original URL was lost when the user was redirected to /login.aspx.

Things seemed futile until a text search of the various configs revealed the following setting in the web.config:

      <!--  SAVE RAW URL ON LOGIN
            Specifies whether the original request URL is passed to the login page
            (saved in 'url' query string parameter).
            Default: false
      -->
      <setting name="Authentication.SaveRawUrl" value="false" />

Changing this setting to “true” means that the return URL is now passed through to /login.aspx as the ‘url’ querystring parameter. You’ll need to modify your login.aspx to look for this parameter and decode it using Server.UrlDecode before redirecting.
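
A rough sketch of what that looks like in the login page’s code-behind (illustrative only, not the actual project code):

using System;
using System.Web.UI;

// Hypothetical code-behind for /login.aspx: after a successful sign-in, send the user
// back to the page they originally requested (passed by Sitecore in the 'url' parameter).
public partial class LoginPage : Page
{
    protected void RedirectToOriginalUrl()
    {
        string returnUrl = Request.QueryString["url"];
        Response.Redirect(String.IsNullOrEmpty(returnUrl)
            ? "/"
            : Server.UrlDecode(returnUrl));
    }
}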

This solution is simpler than the previous options available. It’s probably documented somewhere, I just never got a chance to read about it. I hope this is of help to anyone else who may be facing the same issues.

October 26, 2010

Setting the editor stylesheet in Sitecore

Filed under: Sitecore — joel.cass @ 11:02 am

Recently I was working on a site in Sitecore, and was thinking that it would be great if the editor stylesheet could be changed in the system.

Searching the Internet was generally fruitless, and looking through the core data, I couldn’t find any stylesheet config strings. Then, I stumbled upon the SIP Intranet guide, which pointed out that the editor stylesheet is in the web.config:

(section 4.2.1) The stylesheet that is used for the styling of the rich text fields within the editor is determined by the WebStylesheet setting in the web.config file.

And there it is (the default value is shown here – point it at your own stylesheet):

      <setting name="WebStylesheet" value="/default.css" />
Changing this field did actually update the CSS used in the editor, as well as populate the list of styles offered.

October 6, 2010

SQL Server 2000 Replication – Good one Microsoft

Filed under: Database — joel.cass @ 3:56 pm

One thing that I hate having to deal with is replication in SQL Server 2000. It just seems like it’s half finished, and no-one bothered to think about what they were doing when they wrote it. I’m not going to profess that I’m an expert in the area, I just think that it should have been done differently.

Recently I’ve had to set up replication going both ways between two servers. I had been advised to stick to transactional replication, as it is being used on other databases set up similarly. One problem I have been having with transactional replication, however, is that if a table is replicated both ways, the transactions related to the replication of data are themselves replicated, resulting in a horrible circular reference and, before too long, full logs and no more disk space.

Furthermore, you will get random errors that occur any time new data is inserted into a table on either database: “Cannot insert duplicate key row in object [blah]” or “Violation of [blah] constraint ‘[blah]’. Cannot insert duplicate key in object ‘[blah]’.”. Microsoft erroneously suggest that you add the parameter "-SkipErrors 2601;2627" to the startup of the Distribution agent. Wrong. It should be "-SkipErrors 2601:2627" – and no error occurs on startup if the parameter is incorrect.

So, solving the full logs issue? You will need to stop both agents from running continuously, and schedule one or both of the transfers to happen every [x] minutes, otherwise the transactions will be replicated non-stop until the logs are full.

But the best solution for two-way replication would be not to bother at all with SQL Server transactional replication, at least in 2000. You could try merge replication, or set up a web service so that data is only ever written to a master DB and replicated back to the child databases. Two-way replication is a bad idea.

May 26, 2010

.Net based HTTP Client in ColdFusion?!

Filed under: .net, ColdFusion — joel.cass @ 5:12 pm

I’ve been banging my head up against the metaphorical walls around here for ages trying to get ColdFusion to access websites via a proxy server that only supports NTLM authentication.

Short answer: don’t bother. CFHTTP does not support NTLM Authentication. Most of the Java libraries claiming to do so are hopeless. Support is inconsistent because no-one knows anything about the standard. Except Microsoft.

So, it only came naturally that the best way to solve the issue would be to use .net – and now that ColdFusion has a gateway to .net components, I could actually write something that solves the problem!

So, what I have done is written a wrapper that can be accessed by ColdFusion, and a simple custom tag to finish it off.

Some more information regarding download and implementation is in the Projects section.
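
As a rough illustration of the .net side of such a wrapper (the class and method names here are hypothetical, not the actual project code), the classic System.Net API handles NTLM proxy credentials without much fuss:

using System;
using System.IO;
using System.Net;

// Minimal sketch: fetch a URL through a proxy that requires NTLM authentication.
public class NtlmHttpClient
{
    public string Get(string url, string proxyAddress, string user, string password, string domain)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);

        // The NTLM credentials are attached to the proxy, not to the target site
        var proxy = new WebProxy(proxyAddress);
        proxy.Credentials = new NetworkCredential(user, password, domain);
        request.Proxy = proxy;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}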
